Well, sort of. When you read a sensor pixel you get a value in the range 0 to 2^n-1, where n is the sensor's bit depth. If more photons are received the value stays at 2^n-1 (just as you can't raise the water level in a bucket once the bucket is full).

When you amplify the reading, part of this range is mapped onto 0 ... 2^n-1 by a linear transformation: x2 if twice the native ISO is set, x4 for four times, and so on. Any value that would exceed the limit is set to the maximum ... so there is more clipping at high ISO than at native ISO. Finally, the value is transformed by a gamma function if shadow fill-in is set - this applies a higher multiple at low values. If the image is then converted to JPEG, the values are mapped linearly onto the range 0 ... 255, since JPEG stores only 8 bits per channel. (Minimal code sketches of these steps follow at the end of this answer.)

And photons are always either detected or not detected - you can't detect half a photon. The number of photons you detect at a pixel, if you repeat the exposure many times and ignore thermal noise etc., follows a Poisson distribution centred on the average, whose standard deviation is the square root of that average. The deviation from the average in any particular exposure is what causes quantum (shot) noise, which therefore increases as the ISO is turned up, because the deviation is multiplied along with the raw value (the last sketch below simulates this).

This is a considerable simplification because of offsets, thermal noise etc., plus variation in sensitivity between individual pixels. For a full analysis, look up AIP (as mentioned above).
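To make the amplification-and-clipping step concrete, here is a minimal Python sketch. The 12-bit depth, the `apply_iso_gain` name, and the sample values are illustrative assumptions, not any particular camera's pipeline:

```python
import numpy as np

BIT_DEPTH = 12                    # hypothetical sensor bit depth n
FULL_SCALE = 2**BIT_DEPTH - 1     # maximum raw value: the "full bucket"

def apply_iso_gain(raw, gain):
    """Amplify raw values by a linear gain, then clip at full scale.

    Anything that would exceed 2**n - 1 is clamped, so a higher gain
    (higher ISO) clips more of the bright end of the range.
    """
    amplified = raw.astype(np.int64) * gain
    return np.minimum(amplified, FULL_SCALE)

raw = np.array([100, 1000, 3000, FULL_SCALE])
print(apply_iso_gain(raw, 1))   # native ISO: [ 100 1000 3000 4095]
print(apply_iso_gain(raw, 4))   # 4x native:  [ 400 4000 4095 4095]
```

Note how 3000 survives at native ISO but clips to 4095 at 4x - the extra clipping at high ISO described above.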
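The gamma and 8-bit JPEG mapping can be sketched the same way. The 2.2 exponent is just a common illustrative choice; real cameras apply tone curves of their own:

```python
import numpy as np

FULL_SCALE = 2**12 - 1   # same hypothetical 12-bit sensor as above

def gamma_and_8bit_map(value, gamma=2.2):
    """Apply a gamma curve, then map onto the 8-bit range JPEG stores.

    Normalising to [0, 1] first keeps the curve well-behaved; the
    exponent 1/gamma boosts low (shadow) values the most.
    """
    normalised = value / FULL_SCALE
    curved = normalised ** (1.0 / gamma)
    return np.round(curved * 255).astype(np.uint8)

print(gamma_and_8bit_map(np.array([50, 500, 4095])))  # -> [ 34  98 255]
```

A raw value of 50 (about 1% of full scale) lands at 34 out of 255 (about 13%), which is the "higher multiple at low values" in action.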
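Finally, a quick simulation of the Poisson point: repeated exposures of the same pixel scatter around the mean with a standard deviation of sqrt(mean), and amplifying for higher ISO multiplies that deviation along with the signal. The photon count and gain here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

MEAN_PHOTONS = 400        # hypothetical average photon count per pixel
EXPOSURES = 100_000       # number of repeated exposures to simulate

# Photon arrivals are all-or-nothing, so repeated counts are Poisson.
counts = rng.poisson(MEAN_PHOTONS, size=EXPOSURES)
print(counts.mean())      # ~400
print(counts.std())       # ~20, i.e. sqrt(400): the shot noise

# Amplifying for higher ISO scales signal and deviation alike.
gain = 4
print((counts * gain).std())   # ~80: the noise is multiplied too
```

The signal-to-noise ratio of the amplified values is unchanged, but relative to the fixed 0 ... 2^n-1 output range the noise is four times larger, which is why high-ISO images look noisier.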