You're welcome, Rex.
Every kind of image is just a data file, either in raw format or as a recognisable image file (.JPG, .TIF, etc).
Regardless of the file type, each pixel from the Bayer array ends up represented by a numeric value in the resulting file. In an 8 bit processed file (e.g. an out of camera .JPG), each pixel can hold one of 256 values (2^8, i.e. 0-255). In a 16 bit file, each pixel can hold one of 65,536 values (2^16, i.e. 0-65,535). This means that shifting a colour value by one step has far less impact on a 16 bit file than on an 8 bit file (this is grossly and crassly simplified, but the point still stands).
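To put rough numbers on that "one step" point, here's a small sketch (values and variable names are mine, just for illustration):

```python
# Sketch: how big a one-step change is at 8-bit vs 16-bit depth.
levels_8 = 2 ** 8    # 256 levels, values 0-255
levels_16 = 2 ** 16  # 65,536 levels, values 0-65,535

# One step expressed as a fraction of the full tonal range:
step_8 = 1 / (levels_8 - 1)    # roughly 0.4% of the range
step_16 = 1 / (levels_16 - 1)  # roughly 0.0015% of the range

print(f"8-bit step:  {step_8:.6f} of full range")
print(f"16-bit step: {step_16:.8f} of full range")
print(f"one 8-bit step is {step_8 / step_16:.0f}x coarser")
```

So a single-step nudge in an 8 bit file moves the value a couple of hundred times further (relative to the range) than the same nudge in a 16 bit file.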
Many cameras use 12 bits to represent raw values; some use fewer bits or lossy compression; others use 14 or 16 bits (medium format cameras usually produce 16 bit raw files).
Regardless, when the raw file is converted into an image file, the values are either crunched down into an 8 bit colour space - irrecoverably flattening and losing data - or expanded into 16 bit values. The latter preserves far more data than an 8 bit editing workflow.
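A quick sketch of why the 8 bit direction is irrecoverable, using a simple bit-shift as a stand-in for the (much more sophisticated) mapping a real converter does:

```python
# Sketch: mapping 12-bit raw values (0-4095) down to 8 bits vs up to 16 bits.
# A plain bit-shift is a crude stand-in for a real converter's tone mapping.
def to_8bit(v12):
    return v12 >> 4   # drop the low 4 bits: 16 raw values collapse into 1

def to_16bit(v12):
    return v12 << 4   # scale up: every distinct raw value stays distinct

raw_values = [992, 997, 1002, 1007]          # four distinct 12-bit values
print([to_8bit(v) for v in raw_values])      # all collapse to 62 - data lost
print([to_16bit(v) for v in raw_values])     # four distinct 16-bit values
```

Once those four raw values have become the single 8 bit value 62, no amount of editing can tell them apart again.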
A 12 bit raw file can record 2^12 = 4096 possible values for each photosite. A standard Bayer array is built from repeating 2x2 tiles of four photosites - one red, two green and one blue, usually written RGGB - but there are some really weird colour filter arrays out there! My 2003 Nikon Coolpix 5000 used CMYG, or similar.
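To make the RGGB layout concrete, here's a toy mosaic split into its colour planes (the values and layout are made up for illustration; real cameras vary, and a CMYG array like my old Coolpix would need a different mapping entirely):

```python
# Sketch: splitting a tiny RGGB Bayer mosaic into its colour planes.
mosaic = [
    [10, 20, 11, 21],   # R G R G
    [30, 40, 31, 41],   # G B G B
    [12, 22, 13, 23],   # R G R G
    [32, 42, 33, 43],   # G B G B
]

reds   = [mosaic[r][c] for r in range(0, 4, 2) for c in range(0, 4, 2)]
blues  = [mosaic[r][c] for r in range(1, 4, 2) for c in range(1, 4, 2)]
greens = [mosaic[r][c] for r in range(4) for c in range(4)
          if (r % 2) != (c % 2)]

print(reds)    # [10, 11, 12, 13]
print(blues)   # [40, 41, 42, 43]
print(greens)  # twice as many green samples as red or blue
```

Note the green plane holds twice as many samples as red or blue - that extra green sampling mimics the eye's sensitivity and is why the pattern is often summarised as RGGB.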
More possible values per pixel translates into more editing latitude and finer colour differentiation; subtler, cleaner edits are usually the result.
The colour space determines how large a colour gamut these numbers can represent.
Always remember: you can compress the bit depth of a file, but you cannot expand it! The same goes for going from a wide to a narrow colour space.
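A round-trip sketch makes the one-way nature of this obvious (again, a crude bit-shift standing in for real conversion):

```python
# Sketch: 16-bit -> 8-bit -> 16-bit round trip; the fine steps never come back.
originals = list(range(1000, 1016))        # 16 distinct 16-bit values
crushed   = [v >> 8 for v in originals]    # down to 8 bits: all become 3
restored  = [v << 8 for v in crushed]      # back to 16 bits: all become 768
print(len(set(originals)), "distinct values before")
print(len(set(restored)), "distinct value(s) after the round trip")
```

Converting back up restores the bit depth on paper, but every value the compression merged stays merged.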
In practical terms, an 8 bit sRGB .JPG generated from a raw file edited in a 16 bit Adobe RGB (aRGB) workflow will look much better than an OoC sRGB .JPG - trust me!