A New Sampling Algorithm Could Eliminate Sensor Saturation (scitechdaily.com) 135

Baron_Yam shared an article from Science Daily: Researchers from MIT and the Technical University of Munich have developed a new technique that could lead to cameras that can handle light of any intensity, and audio that doesn't skip or pop. Virtually any modern information-capture device -- such as a camera, audio recorder, or telephone -- has an analog-to-digital converter in it, a circuit that converts the fluctuating voltages of analog signals into strings of ones and zeroes. Almost all commercial analog-to-digital converters (ADCs), however, have voltage limits. If an incoming signal exceeds that limit, the ADC either cuts it off or flatlines at the maximum voltage. This phenomenon is familiar as the pops and skips of a "clipped" audio signal or as "saturation" in digital images -- when, for instance, a sky that looks blue to the naked eye shows up on-camera as a sheet of white.

Last week, at the International Conference on Sampling Theory and Applications, researchers from MIT and the Technical University of Munich presented a technique that they call unlimited sampling, which can accurately digitize signals whose voltage peaks are far beyond an ADC's voltage limit. The consequence could be cameras that capture all the gradations of color visible to the human eye, audio that doesn't skip, and medical and environmental sensors that can handle both long periods of low activity and the sudden signal spikes that are often the events of interest.

One of the paper's authors explains: "The idea is very simple. If you have a number that is too big to store in your computer memory, you can take the modulo of the number."
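
To illustrate the principle (a minimal numerical sketch of modulo folding and unwrapping, not the authors' actual reconstruction algorithm, which uses higher-order differences and handles noise; the signal, ADC range, and oversampling rate below are arbitrary choices):

    import numpy as np

    lam = 1.0  # hypothetical ADC half-range; a self-reset ADC folds into [-lam, lam)

    def fold(x, lam=1.0):
        # Centered modulo: maps any voltage back into [-lam, lam)
        return np.mod(x + lam, 2 * lam) - lam

    t = np.linspace(0, 1, 2000)               # heavily oversampled time axis
    signal = 5.0 * np.sin(2 * np.pi * 3 * t)  # peaks 5x beyond the ADC range
    samples = fold(signal, lam)               # what the folding ADC records

    # With enough oversampling, consecutive true samples differ by less than
    # lam, so any larger jump must be a fold: unwrap the first differences,
    # then re-integrate to recover the out-of-range signal.
    diffs = fold(np.diff(samples), lam)
    recovered = samples[0] + np.concatenate(([0.0], np.cumsum(diffs)))
    print(np.max(np.abs(recovered - signal)))  # tiny: exact up to float rounding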

Comments Filter:
  • pure analong systems have been doing this for decades, let's bring back the vacuum tube

    • Yeah, that's coming in the iPhone 9. No 1/4 inch headphone jack, though.

    • pure analong systems have been doing this for decades, let's bring back the vacuum tube

      If you are talking about "limiting", those "algorithms", especially when implemented in analog hardware, have serious and inherent limitations as far as response times and "recovery" times, due to having "integration" in their "envelope-following" circuitry.

      This appears to be a sample-by-sample mathematical transform (and importantly, one that doesn't require the deadly "integration" that always imparts a time-delay in attack and release), that, through mathematical witchery, can accomplish dynamic range li

      • but that very issue is one part of the magic of overdriving vacuum tube amps for rock-n-roll, baby!

        • but that very issue is one part of the magic of overdriving vacuum tube amps for rock-n-roll, baby!

          Yeah, like the Aphex Aural Exciter. I read an article once in Modern Recording or somesuch that the "Aphex effect" was accidentally "discovered" when the "inventor" mis-wired a tube amplifier kit he was building. His "mod" apparently starved some (or all) of the tubes for (IIRC) bias voltage. This created a peculiar (and pleasant) type of clipping/distortion (likely more like "crossover distortion", which is created when the waveform transitions between negative and positive phase, and the tubes kind of "fla

    • pure analong systems have been doing this for decades, let's bring back the vacuum tube

      You could have just said "I didn't read the article". It would have been shorter to write.

      • I did read the article; guess again, Squidward

        • I did read the article; guess again, Squidward

          Oh sorry. Maybe then you should lead with "I have no idea what I'm talking about".

          • you're the ignorant one who didn't understand when there was a joke, Squidward

            • you're the ignorant one who didn't understand when there was a joke, Squidward

              Oh... heh heh? Don't become a comedian. You suck at it*.

              *Not a heckle, just honest feedback.

    • by Anonymous Coward

      Vacuum tubes do not help with saturation. They heavily compress and become noisy when approaching saturation, as opposed to a transistor which will saturate suddenly. These guys may think they've found a way to store a number beyond the saturation point from the algorithm point of view, but how the electronic circuit would actually measure it is not detailed. An ADC measures something as a fraction of a reference voltage. Beyond that reference voltage, the transistors in the ADC circuit are saturated. If yo

      • The article isn't explicit enough.

        Their approach reads as if they're effectively (for example) taking a 12 bit conversion and throwing away the upper 4 bits, then inferring the discarded bits when doing reconstruction. There's something more complicated than just that going on, because they say the process runs away (diverges) if the inference is mistaken. The mistake is then corrected and the process would converge with the proper inference. That does, however, mean that several samples worth of output hav

    • pure analong systems have been doing this for decades

      The article is about an algorithm for analogue to digital converters. So what you are claiming is that you had pure analogue, analogue to digital converters? Replacing silicon transistors with valves does not change the fact that the circuit is still digital. Valve computers were still digital computers, they just used a different switching technology.

      • by Khyber ( 864651 ) <techkitsune@gmail.com> on Sunday July 23, 2017 @12:56AM (#54860871) Homepage Journal

        "So what you are claiming is that you had pure analogue, analogue to digital converters?"
        "Replacing silicon transistors with valves does not change the fact that the circuit is still digital."

        All circuits are analog. Period. That's the physics of it. 'Digital' is just a sampled section of the signal measured against a reference voltage. Those are still both analog waveforms or sections thereof.

        It's like people suddenly forgot the bare fucking basics and physics of basic electronics when the world went digital. You dipshits fell hook line and sinker for the digital marketing hype.

        • All circuits are analog. Period. That's the physics of it.

          If you are going to get that pedantic then no, you are wrong in two ways. First go look up quantum mechanics and then know that this governs how semi-conductors work. These devices transition between two binary states in a non-analogue way, smeared out a little by thermal effects. However, the end result of this is that they allow a certain amount of charge to pass which is either above some threshold or not, and so we treat it as a one or zero.

          Hence the circuit is digital because we define our own, artif

          • by Khyber ( 864651 )

            "First go look up quantum mechanics and then know that this governs how semi-conductors work"

            I build raw LED dies from the base silicon wafer up with vacuum chemical vapor deposition, I know damn well how semiconductors work.

            Also, we have algorithms that reconstruct the full analog waveform of, say, light, from actual frequency right down to the very direction it travels and where it came from (see Lytro cameras).

            We don't need discrete signals. We only use them because it is easier to control.

        • Digital in this context means that a signal is represented by an integer. Please stop trying to confuse people.
      • no, I was just making a joke referencing a certain characteristic of vacuum tube amplifiers, 007

    • I don't have the space for a analong system. Guess I'll have to settle for anashort.
  • You mean, you take a value that is outside the range of the sensor and convert it into a lower value (normalization anyone?) that can be worked with!! My God man, it will be a revolution!!!!

    • Re: (Score:3, Informative)

      by Anonymous Coward

      No, it's not normalization. From a preliminary reading, they're just doing rudimentary frequency analysis to provide qualifications under which modular representations can be inversely mapped to a real-world voltage reading, i.e. a low-enough-energy high-frequency component such that an extremely high to extremely low (or vice-versa) transition can be interpreted unambiguously as bounds clipping rather than a transition within the typical dynamic range of the device. That's why they're taking the sampling t

      • Nothing mind-blowing

        Go back and do more than a preliminary reading. They are utilizing a property of a specific type of ADC (a so-called self-reset ADC or SR-ADC), which instead of saturating reverts to the lower bound value when the input voltage exceeds the upper bound, and vice versa.

        I admit that I don't understand their algorithm, however they were able to reconstruct a random signal with a maximum amplitude exceeding 20 times the ADC upper bound. The mean squared error between the original and reconstructed signals was 1.5 x

        • Re:What genius!! (Score:4, Informative)

          by green is the enemy ( 3021751 ) on Saturday July 22, 2017 @04:30PM (#54859181)
          I'm an EE. This concept is interesting to me, but then I'm left wondering how they really tackle the problem of signal limits. It's not just the ADC that limits the signal. The amplifiers in the chain also do it. Maybe I should just read about it. The whole self-resetting ADC concept just strikes me as odd. I have a feeling it was invented to improve the dynamic range or sampling rate or reduce the power usage of ADCs, but not to magically sample arbitrarily large signals.
          • Also an EE, greetings brother/sister!

            Many of the same things intrigued me as well. The answer is that they don't tackle the issue of signal limits. What this shows is that IF a signal reaches the ADC that is out of bounds, it is possible to reconstruct using their algorithm. Obviously all the practicalities of signal processing still apply.

            What I wonder is, say you have a 5V ADC. Using their technique, could you drive a 15V (max) signal into the ADC and effectively triple your resolution? You're still using all your bits to measure a 5V range... so if that's the case then it truly is quite groundbreaking.

            • by ceoyoyo ( 59147 )

              As far as I can tell, they've done some math to put bounds on how fast the signal can change (how high a frequency can be present) to allow reconstruction. That there is a limit is pretty intuitive, since if your signal changes slowly enough (relative to your sampling frequency) then you can easily identify and count each reset.

              It seems to me that would work fine in things like audio recorders, but won't work so well in things like cameras.

              • As far as I can tell, they've done some math to put bounds on how fast the signal can change (how high a frequency can be present) to allow reconstruction.

                That has already existed for a long time, it's called the Nyquist Frequency, and it's half the sampling frequency. Or rather, you must sample at a rate at least double the highest frequency you want to measure.
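
                To see the folding in action (a toy aliasing demo; the frequencies are arbitrary choices): a 6 Hz tone sampled at only 8 Hz, below its 12 Hz Nyquist rate, lands on exactly the same samples as an inverted 2 Hz tone.

                    import numpy as np

                    fs = 8.0           # sample rate, below Nyquist for a 6 Hz tone
                    n = np.arange(16)  # sample indices
                    tone_6hz = np.sin(2 * np.pi * 6 * n / fs)
                    tone_2hz = np.sin(2 * np.pi * 2 * n / fs)
                    print(np.allclose(tone_6hz, -tone_2hz))  # True: 6 Hz aliases onto 2 Hz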

                • by ceoyoyo ( 59147 )

                  No, it's not quite the same thing, although the concepts are related. In this case, reconstructing your signal is still limited by the Nyquist frequency of course, but your ability to reconstruct across the ADC reset discontinuities also requires that the amplitude doesn't change so fast that you get more than one wrap per sample period.

                  I suspect it actually is the Nyquist limit, but applied in this weird phase, er, amplitude unwrapping situation.

                  • by Khyber ( 864651 )

                    This isn't using Nyquist at all I suspect.

                    To convert into guitarist language, this is effectively detecting a phase harmonic and measuring the saturated amplitude, and doing a hard-clip compression on it so the 'data' is essentially recreated.

                    We did something akin to that as a digital photography experiment back in high school photography electives.

            • What I wonder is, say you have a 5V ADC. Using their technique, could you drive a 15V (max) signal into the ADC and effectively triple your resolution? You're still using all your bits to measure a 5V range... so if that's the case then it truly is quite groundbreaking.

              It may be groundbreaking, but not for the reason advertised in the paper/article/summary. From a quick look at this paper [ieee.org], ADC power dissipation is proportional to f * 2^(2*n), where f is the sampling rate and n is the number of bits per sample. High performance ADCs are constrained by power dissipation, which limits either sampling rate or resolution. What these guys are probably trying to do is constrain n. By allowing signals larger than the ADC input range, and then unwrapping them in software, they inc
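
              To put rough numbers on that scaling law (illustrative only, using the proportionality quoted above):

                  # ADC power ~ f * 2^(2n): shaving bits buys sampling rate exponentially
                  def relative_power(f, n):
                      return f * 2 ** (2 * n)

                  base = relative_power(f=1.0, n=14)
                  print(relative_power(1.0, 12) / base)   # 0.0625: 2 fewer bits, 1/16 the power
                  print(relative_power(16.0, 12) / base)  # 1.0: or 16x the rate at equal power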

            • by Entrope ( 68843 )

              Their technique reportedly lets you recover a 15V p-p signal with a 5V p-p ADC, but you lose most of your measurement bandwidth. Their example used pi*e oversampling on top of the usual Nyquist-Shannon factor of two. In practice, unless you are dealing with a signal that has a tiny bandwidth but huge dynamic range, I think you would do better to scale your input signal down and use a more standard ADC.

              (I will also echo the criticism that they did this all with only a simulation, not with actual hardware,
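
              A rough check of that bandwidth cost (the 1 MS/s converter is an arbitrary example):

                  import math

                  oversampling = math.pi * math.e  # ~8.54, the factor cited above
                  adc_rate = 1_000_000             # hypothetical 1 MS/s converter
                  conventional_bw = adc_rate / 2   # ordinary Nyquist bandwidth
                  unlimited_bw = adc_rate / (2 * oversampling)
                  print(round(conventional_bw), round(unlimited_bw))  # 500000 vs ~58550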

          • I'm an EE. This concept is interesting to me, but then I'm left wondering how they really tackle the problem of signal limits. It's not just the ADC that limits the signal. The amplifiers in the chain also do it. Maybe I should just read about it. The whole self-resetting ADC concept just strikes me as odd. I have a feeling it was invented to improve the dynamic range or sampling rate or reduce the power usage of ADCs, but not to magically sample arbitrarily large signals.

            I grew up and built stereos, hi-fis, tuners, etc., in the diode, triode, pentode era. As the gain increased (pentodes), so did the "white noise": the noise of electrons escaping from the cathode and getting past the control grids.
            We had to purchase, at relatively high cost, triodes and power tubes that were hand-selected for their low levels of white noise.

            How will "white noise be handled?

            • Modern semiconductors are very good, and at least for audio it's possible to get components and design circuits with less noise than any microphone.

              The subject of noise in tubes and semiconductors is interesting. Tubes have several factors that contribute to their noise, one of which is the heat of the cathode, which yields an effective "noise temperature". The cathode-caused noise temperature is reduced by the first grid (the space charge between the cathode and the first grid tends to smooth the electron

        • by Agripa ( 139780 )

          Folding-type ADCs have been around as practical implementations since at least the 1970s, and while they do what the authors want, the folding process is internal so they saturate just like most other ADCs. Support for external folding means adding another stage (or increasing the resolution of an existing stage) to increase the resolution of the ADC, and if you are going to do that, why not just accept the full resolution instead of throwing bits away and trying to reconstruct the input? Modern and ubiquitous p

    • They need a good name for it. Something that shows that the values have wrapped around, but in a good way. So something like wraparound but with a good connotation. They should definitely call it "reacharound".

  • This is what phase unwrapping algorithms do. They have their own flaws too.

  • by Anonymous Coward on Saturday July 22, 2017 @12:51PM (#54858383)

    1. Audio clipping is present in purely analog recording systems (and playback), so it is not an ADC problem.

    2. The sensor, any sensor, has physical limits that will cause saturation (i.e. clipping) regardless of the cleverness of the ADC downstream.

    3. In most cases it is easier to devise an ADC with enough bits (i.e. precision and dynamic range) to exceed the sensor it is connected to.

    Summary: a solution in search of a problem.

    • by dgatwood ( 11270 )

      Not sure who modded this "troll", but the AC is quite correct.

      Image sensors have a number of physical limits other than the ADC. It's easy to get a bigger ADC. Most image sensors only use 14–16 bits these days, whereas 24-bit ADCs are readily available. There's room for improving the dynamic range of the ADC portion of image sensors by more than two orders of magnitude (256x) with relative ease. Camera manufacturers haven't bothered to do so, however, largely because the primary limitations in i

    • Why is this modded troll? It's completely correct. There's no new sampling algorithm that can prevent clipping at the sensor level. That's a physical issue.

    • The idea of folding ADCs is to reduce the complexity of an ADC. The result, however, is potential data loss, and this article proves what conditions are necessary to recreate the original waveform from the samples. (See this for example [berkeley.edu] for an explanation of ADC complexity and ways to simplify it.)
  • by Anonymous Coward

    On first glance this seems to be equivalent to Delta Encoding [wikipedia.org].
    If your delta is guaranteed to be less than 2^X, you can encode data of any range using X+1 bits (one for the sign).
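
    A toy version (hypothetical helper names) showing that only the first value needs the full range:

        def delta_encode(values):
            return [values[0]] + [b - a for a, b in zip(values, values[1:])]

        def delta_decode(deltas):
            out = [deltas[0]]
            for d in deltas[1:]:
                out.append(out[-1] + d)
            return out

        samples = [100000, 100003, 100001, 99998, 100002]
        deltas = delta_encode(samples)
        print(deltas)                           # [100000, 3, -2, -3, 4]: small deltas
        print(delta_decode(deltas) == samples)  # True: lossless round trip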

  • by johannesg ( 664142 ) on Saturday July 22, 2017 @01:09PM (#54858475)

    It's a different type of ADC, one that resets when it reaches saturation. So you can forget about using this 'new algorithm' in your existing equipment.

    • It's both. It utilizes the unique properties of these ADCs, but there is absolutely an algorithm. You stopped reading the paper too early.

  • For the last couple of decades 24 bit A/D converters have been used to digitize the output of seismometers so we don't do this anymore. Previously 16 bits was pretty much all you could get so if the input signal got too high there would be circuitry to reduce the voltage into the A/D to keep it from saturating.
    • More bits provide greater resolution, not increased range. Saturation will always be saturation. There is no math that will change this simple fact, and someone needs to bitch slap these idiots for (further) sullying the MIT name.
      • Using an exponential scale gives you an increased range with the same number of bits, varying resolution in parts of the range (i.e., a fixed number of meaningful digits rather than a fixed absolute accuracy). It's easy to do so in a way that can represent any conceivable real input. And if even that is not enough, you can use bignums.
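
        For a concrete instance of that trade-off, mu-law companding (the exponential mapping used in 8-bit telephony) is a classic example; a small sketch:

            import math

            MU = 255.0  # standard mu-law parameter

            def mu_compress(x):
                # x in [-1, 1]; compresses large values, stretches small ones
                return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

            def mu_expand(y):
                return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

            for x in (0.001, 0.01, 0.1, 1.0):
                y = mu_compress(x)
                print(f"{x:>6}: coded {y:.3f}, decoded {mu_expand(y):.6f}")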

      • If you take the time to look at the paper, you'll notice much of the beginning is dedicated to the properties of a unique type of ADC, which their algorithm utilizes.

        • They can design any ADC they want. A saturated transducer will be saturated regardless. No need to read the paper. GIGO ... It isn't just for software.
      • by ceoyoyo ( 59147 )

        If you've got more bits you change the gain on your amplifier to translate that into more range.

        Saturation is saturation, which is irrelevant if you're using a type of ADC that resets instead of saturating.

  • I built a sampler once that simply filled a big capacitor, then used it to repeatedly fill and discharge a much smaller one while incrementing a counter each time until it couldn't. Rinse and repeat. The counts are then representative of the voltage. The only difference I see is that they measured the remainder when the big cap drops below the capacity of the small one. Right?

    If so, you get an odd effect where sampling frequency is inversely related to the amplitude of the sample.

    Why not just drop the saturation p

    • There are several problems. A counter is required for each pixel, and each counter will be incremented anywhere from hundreds to tens of thousands of times per exposure. That's a lot of activity on the sensor chip, which means heat and noise. For a 4000x3000 image sensor, there'd be 12 million wires running from pixels to counters. That's a chip construction problem, serious enough that it might make such a chip impossible. Conventional image sensors use analog shifting or multiplexing, resulting in wire co

      • I was thinking a lot more out of the box. Basically, throw away the idea of frames at the chip level. Also, throw away the idea of knowing the intensity of light at a point. The intensity is represented by the number of times the pixel cycles in a given time. And set the threshold for firing as low as you can (it will be limited in how low it can be set by the speed of whatever output scheme is chosen).

        For example, in vision applications, an approach that might work is to feed each pixel into an input of a

  • by dgatwood ( 11270 ) on Saturday July 22, 2017 @01:36PM (#54858571) Homepage Journal

    This is an interesting approach, and it would work pretty well for things like audio. It might help with the dynamic range of cameras when used at higher ISO settings, but it will not solve the problem by any means. The problem, though, is that in modern cameras, the sensor's pixels also have a maximum capacity, called the full well capacity. The sensor can't physically accumulate more of a charge than its full well capacity, and the ADC is designed so that its clipping point matches the full well capacity of the sensor at its base ISO. So you would still get clipping when the brightness exceeds what would otherwise be the sensor's clipping point at its base ISO, and if it is already at its base ISO, this wouldn't make any difference at all.

    IMO, a better approach (which I proposed several years ago) is to sample the sensor and physically cancel out (subtract) the measured charge in the sensor itself, doing this multiple times per exposure to ensure that you don't hit the full well capacity. That approach also has the advantage of letting you do really cool time-based manipulation of the resulting photo. For example, you could do vector-based motion compensation of the individual subframes to dramatically reduce motion blur, compensate for some amount of camera shake, etc.

    Even better, if you represent subsequent subframes relative to the previous subframe (e.g. -12 here, +2 there), you'll also usually get a high percentage of zeroes, which means you should be able to losslessly compress the additional subframes to be pretty small on average, potentially giving you the ability to adjust the image motion compensation after the fact to get an image in which motion is blurred more or less, according to taste.

    In theory, you could even do bizarre, per-region motion compensation, such as making a baseball appear to be motionless while the bat is swinging at a high speed or vice versa. :-D But I digress.
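
    A toy illustration of the subframe-delta idea above (hypothetical numbers; real sensors are far larger): between closely spaced subframes most pixels barely change, so the deltas are mostly zeros and compress well.

        import numpy as np

        rng = np.random.default_rng(0)
        frame0 = rng.integers(0, 4096, size=(4, 4))  # first 12-bit subframe
        frame1 = frame0.copy()
        frame1[1, 2] += 12                           # only one pixel changed
        delta = frame1 - frame0                      # what would be stored
        print(np.count_nonzero(delta), "of", delta.size, "entries nonzero")  # 1 of 16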

    • by Zorpheus ( 857617 ) on Saturday July 22, 2017 @03:24PM (#54859003)
      So you pretty much want to read out the sensor at a higher framerate, and combine multiple images into one. This means that the sensor must be capable of a much higher framerate. And the image quality might get worse due to the readout noise, but I don't know if this is relevant in normal, uncooled cameras.
      • by Pieroxy ( 222434 )

        Isn't that what HDR is ?

        • Yes and no. HDR is just a term to describe more dynamic range than you can capture in a single exposure. The traditional HDR process relies on different images with different exposure times, short to capture bright data without saturation, and long to bring faint data above the noise floor.

          The GP is talking about combining lots of identical exposures and then mathematically combining them to reduce the noise floor. This is used a lot in astronomy, and yes those pictures could be described as having "high dynamic range".

      • Image quality is a bit worse, but read-out noise is statistically random and will be reduced across multiple iterations as the data is combined into a final image.

    • by Arkh89 ( 2870391 ) on Saturday July 22, 2017 @03:30PM (#54859021)

      And every time you read out a sub-frame you are penalized by the read noise... after accumulation of the variances, you end up with an extremely noisy image. If you want to do that you don't just need a very good quantum efficiency (the probability of an incident photon to be absorbed and to release an electron), you need an almost perfect read-out circuitry (if you want to operate without cooling). Eric Fossum has proposed a "Quanta" binary sensor which would do this with a ~0.15e- RMS read-out noise, which has to be compared with the 1.5+e- of the best sensors used in consumer applications today.

      • The readout noise is statistically quite random. This process is used in astronomy precisely to *reduce* the noise floor.

        However you did touch on one thing: read-out circuitry. One of the problems comes in the speed of reading out the data. The better the readout quality the slower it is normally performed on the sensor. By reading out multiple times a millisecond you're going to severely impact performance.

        • by Arkh89 ( 2870391 )

          No, in astronomy you are interested in reducing the noise for the equivalent of a sub-exposure length. This means that if you combine, say, 100 images of 5 minutes, the result should be better in terms of noise (and thus DR) than a single 5-minute exposure. Here we are interested in a totally different normalization, which consists of deciding the total number of sub-frames dividing the total exposure (500 minutes with the previous analogy). For a simple stochastic sensor model, the smallest number of sub-frames (1) w

    • IMO, a better approach (which I proposed several years ago) is to sample the sensor and physically cancel out (subtract) the measured charge in the sensor itself, doing this multiple times per exposure to ensure that you don't hit the full well capacity. That approach also has the advantage of letting you do really cool time-based manipulation of the resulting photo. For example, you could do vector-based motion compensation of the individual subframes to dramatically reduce motion blur, compensate for some amount of camera shake, etc.

      What you're proposing also has incredible limitations in terms of noise floor and statistical methods. This process is already used in astronomy where integration times are really long. Problem is, in terms of linearity, signal-to-noise ratio, and every other metric other than saturated photocells, you get a large quality hit in comparison to capturing the data in one go.

      • by ceoyoyo ( 59147 )

        But you gain the ability to correct motion or other artifacts, and increase dynamic range, which is why the technique is used in astrophotography.

    • It sounds like you're looking for something that already exists, albeit in specialized usage: the Digital Focal Plane Array [mit.edu], where each pixel has processing circuitry below (or beside) it. It does things like on-sensor motion compensation and integration and very high bit depth integration, even on a shaking platform and with low absolute pixel count. This lets you do things like make a near-1Hz near-gigapixel image from a 640x480 sensor [mit.edu] and other interesting things.
  • First, pre-scalers have been around forever. You just drop one sample, adjust scale and interpolate the missing sample. Easy and effective. And second, no, you _cannot_ take the modulo of an analog signal. All your analog parts still need to be able to cope with the full signal amplitude or _they_ will clip. And guess what? A pre-scaler is an analog part.

    This is one more instance of no-understanding bad tech reporting, nothing else.

    • Try reading the paper. It has nothing to do with what you're talking about.

  • by Anonymous Coward

    The authors do not describe how to actually construct a circuit using this sampling technique. It would not be possible for DC-coupled ADCs. I don't know how you would solve the problem of going beyond the power rails, where transistors cease to operate normally.

    Good audio converters are 24-bit these days.
    That means you can have 20 bits worth of dynamic range (~120dB) with 4 bits worth of clipping headroom (24dB).
    That will exceed whatever is plugged into the A/D.

    High-end cameras are using 14-bit converters (uber high end has hit 16), so the problem is pretty much solved there too.
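
    The arithmetic behind those figures: each bit is worth about 20*log10(2), roughly 6.02 dB, of dynamic range.

        import math

        db_per_bit = 20 * math.log10(2)             # ~6.02 dB per bit
        print(f"{20 * db_per_bit:.1f} dB range")    # ~120.4 dB from 20 bits
        print(f"{4 * db_per_bit:.1f} dB headroom")  # ~24.1 dB from 4 bits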

    • This right here. Audio recording abilities are good enough to essentially discern the sound of a person quietly breathing while a military jet is taking off next to you with full afterburner. We didn't even have practical dynamic range limits in the audio world back in the CD days. Our ears just aren't that good at picking differences in dynamic range. When blood is pouring out of our ears we won't be complaining about not being able to hear someone breathing.

      The problem does exist with digital imaging thou

  • by labnet ( 457441 ) on Saturday July 22, 2017 @04:25PM (#54859175)

    I've skim-read the paper and /. comments, and this reads like mathematical wank by guys that have never touched an oscilloscope.

    First, they are waving their hands in the air about a magic 'resetting ADC'... seriously...
    Do they even know what reset means? It has to be performed at the hardware level, it has to be performed with DC offsetting (from a D/A converter), it has to be performed to 1 least significant bit of accuracy, and the input signal has to be rate-limited. No way this will happen for any practical system without adding artefacts when the offsetting circuitry tries to slew the input within one sample period.

    The only real world way I can think of, that still retains DC accuracy, is servoing the input.
    This is where a 'counteracting' force is used to subtract from the input... but servoing has hairs all over it, as it has to be super accurate in terms of amplitude and frequency response.

    They should have talked to an electrical engineer before spouting off this rubbish.

    • by ceoyoyo ( 59147 )

      Ha ha... there's a cartoon on the window of a lab down the hall, showing how to get a paper accepted in an IEEE journal. You start with something like 1 = 1 and end up with a page of math.

      I've personally gotten this review back from IEEE TSP: "too many words, not enough math."

    • by jwdb ( 526327 )

      Well, one of the parts of the paper that you missed in your "skim-reading" is that resetting ADCs have been theorized since the '70s and have physically existed since the 2000s.

      From the article:

      Physical realizations only started to develop in the early 2000's. Depending on the community, the resulting ADC constructions are known as folding-ADC (cf. [27] and references therein) or the self-reset-ADC, recently proposed by Rhee and Joo [28] in context of CMOS imagers (see Fig. I-(b1-b3) for visualiza

      • by labnet ( 457441 )

        Well, jwdb, I'm happy to be criticized (and I did read that part in the paper, but I've never seen one).
        Educate me and link me to a data sheet of one of these 'resetting ADCs'.

        • by jwdb ( 526327 )

          You've never seen one, so they don't exist?

          There's at least two citations of work on the implementation of resetting ADCs in the paper you read. Google them.

  • Everything old becomes new again when rediscovered. Here's an old patent from a former co-worker on an ADC that performs this analog adjustment bit-by-bit to create a flash ADC. https://www.google.us/patents/... [google.us] The precision of such ADCs depends upon having deadly accurate 2^N analog values. If you can create a deadly accurate 2x amplification, you can cascade a series of identical stages to build an ADC.

    • by Anonymous Coward

      No, this paper describes a means to take a clipped signal from an existing resetting ADC and recover the original signal *without* having any knowledge of how many times it had to reset to record it - thus providing unlimited dynamic range. (For a suitable time-sampled signal).

      • Theoretically, any perfectly band-limited signal, perfectly sampled at least the number of times specified by Nyquist, can be reconstructed perfectly. The samples need not be uniform in time, and the signal may exceed the range of the ADC at times when it is not being sampled. The "only" problems are practical ones: the calculation burden is immense, and errors in timing and voltage measurement are greatly magnified if the samples are clumped so that relatively large periods are unmeasured.

        This would provid
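
        A small demonstration of the easy, uniform-sampling case of that reconstruction claim (the tone, rate, and window are arbitrary; the truncated sum makes it only approximately exact):

            import numpy as np

            fs = 10.0                             # sample rate, Hz
            n = np.arange(-50, 51)                # finite window of sample indices
            x = np.sin(2 * np.pi * 3.3 * n / fs)  # 3.3 Hz tone, safely below fs/2

            def sinc_interp(t):
                # Whittaker-Shannon interpolation from the samples x[n]
                return np.sum(x * np.sinc(fs * t - n))

            t0 = 0.537  # an instant between sample points
            print(sinc_interp(t0), np.sin(2 * np.pi * 3.3 * t0))  # nearly equal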

  • by Spaham ( 634471 ) on Sunday July 23, 2017 @07:27PM (#54864085)

    Oh you mean they turned the dial to 11 ?
