Light units

Light is one of the most fundamental natural phenomena, vital not only for life on this planet but also for technical advancement in the areas of visual communication. Here we will discuss the aspects of light that affect video signals and CCTV.

The major “problem” scientists face when researching light is its dual nature: it behaves as though it were a wave – through the effects of refraction and reflection – but it also appears to have a particle nature – through the well-known photoelectric effect, discovered by Heinrich Hertz in the nineteenth century and explained by Albert Einstein in 1905. As a result, modern physics accepts light as a phenomenon of a “dual nature.” In explaining the concepts of lenses we will mostly use the wave theory of light. On the other hand, the operating principles of imaging chips (CCD or CMOS), for example, are based on light’s particle (material) behavior.

Human eye vs. Camera comparison

The brain’s “image processing section” concentrates on 30°, although we see best at around 10°. This processing is further supported by the constant eye movement in all directions, which is equivalent to a pan/tilt head assembly in CCTV.
For a single lens reflex (SLR) camera the standard angle of view of 30° is achieved with a 50 mm lens; for a 2/3” camera this is a 16 mm lens, for a 1/2” camera a 12 mm lens, and for a 1/3” camera an 8 mm lens. In other words, images from any type of camera, taken with its corresponding standard lens, will have a very similar size and perspective to what we see with our eyes.
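As a rough cross-check of these focal lengths, the horizontal angle of view can be estimated from the sensor width and the focal length. The sketch below uses approximate nominal sensor widths (assumed illustrative values, not exact specifications):

```python
import math

def horizontal_angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view (degrees) of a simple rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Approximate nominal sensor widths in mm (illustrative values only)
setups = {'2/3" with 16 mm': (8.8, 16), '1/2" with 12 mm': (6.4, 12), '1/3" with 8 mm': (4.8, 8)}

for name, (width, focal) in setups.items():
    print(f"{name}: {horizontal_angle_of_view(width, focal):.1f} deg")
```

Each combination comes out close to the 30° “standard” view mentioned above.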

Lenses shorter in focal length give a wider angle of view and are called wide-angle lenses. Lenses
with longer focal length narrow the view, and therefore they look as if they are bringing distant objects closer, hence the name telephoto (“tele” meaning distant). Another matter of interest associated with CCTV is that, by knowing the focal length of the eye and its maximum iris opening of approximately 6 mm, we can estimate the eye’s equivalent maximum aperture (F-stop).

 

Optics in CCTV

 

The basic concepts we have to understand are refraction and reflection.

When a light ray traveling through air or a vacuum enters a denser medium, like glass or water, it reduces its speed by a factor n (always greater than 1) known as the index of refraction.

Different media (which are transparent to light) have different indices of refraction. For example, the speed of light in air is approximately 300,000 km/s, almost the same as in a vacuum. If a light ray enters glass, for example, which has an index of 1.5, the speed is reduced to 200,000 km/s.

If a light ray enters the glass perpendicularly, the wavelength of the light ray shortens, but when the ray exits the glass it resumes its normal speed, that is, returns to the original “air wavelength” and continues its travel in the same direction.
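A quick numerical illustration of the above, using the rounded values already mentioned (300,000 km/s and n = 1.5) and an assumed green wavelength of 550 nm:

```python
c_air = 300_000            # km/s, approximate speed of light in air or vacuum
n_glass = 1.5              # typical index of refraction of glass

v_glass = c_air / n_glass  # speed inside the glass
print(v_glass)             # 200000.0 km/s

# The wavelength shortens by the same factor while inside the glass:
wavelength_air = 550       # nm, green light (illustrative value)
wavelength_glass = wavelength_air / n_glass
print(round(wavelength_glass, 1))  # 366.7 nm, restored to 550 nm on exit
```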

CTF and MTF

What we want from a lens is sharp and clear images, free of distortions.

Resolution refers to the lens’s ability to reproduce fine details. To measure this ability, a chart consisting of black and white stripes of various densities (spatial periods) is used. The density is usually expressed in lines per millimeter (lines/mm). When counting how many lines/mm a lens can resolve, we count both black and white lines.
A characteristic that shows the “response” of a lens to various densities of lines/mm is called a Contrast Transfer Function (CTF).
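One common way to express the contrast at a given stripe density is the modulation (max - min)/(max + min) of the reproduced black and white levels. The sketch below tabulates a CTF from such measurements; the numbers are invented purely for illustration:

```python
def contrast(v_max, v_min):
    """Modulation (contrast) of a reproduced black/white stripe pattern."""
    return (v_max - v_min) / (v_max + v_min)

# Hypothetical reproduced white/black levels (arbitrary units) at each density
measurements = {  # lines/mm : (white level, black level)
    10:  (0.95, 0.05),
    40:  (0.80, 0.20),
    80:  (0.60, 0.40),
    120: (0.52, 0.48),
}

for density, (white, black) in measurements.items():
    print(f"{density:3d} lines/mm -> CTF = {contrast(white, black):.2f}")
```

The falling contrast with increasing line density is exactly what the CTF curve of a real lens shows.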

 

CCD Camera

A Charge Coupled Device (CCD) camera uses the photoelectric effect, which explains how light energy (photons) is converted into an electrical signal in the form of free electrons. The base material of an imaging chip is silicon, a semiconductor that “decides” its conductive state based on the electric field around it.

The basic principle of CCD operation is storing information as electrical charges in the elementary cells and then, when required, shifting these charges to the output stage. The electron charges are shifted out vertically and/or horizontally by means of voltage pulses.

The quantity of electrons in these mobile packets can vary widely, depending on the applied voltage and on the capacitance of the storage elements. The amount of electric charge in each packet can represent information.

The construction of CCD sensors can be in the form of either a line (linear CCD) or a two-dimensional matrix (array CCD).

Linear sensors are used in applications where there is only one direction of movement by the object, such as with facsimile machines, image scanners, and aerial scanning.  

A two-dimensional sensor chip may have over 120 megapixels; it is limited only by the silicon wafer size.

 

How charges are transferred in a CCD sensor

The quality of the signal produced by the CCD chip also depends on the pulses used for transferring charges. These pulses are generated by an internal master clock crystal oscillator in the camera.

The drawing below shows how video signal sync pulses are created using the master clock. This frequency depends on many factors, but mostly on the number of pixels the CCD chip has, the type of charge transfer (FT, IT, or FIT), and the number of phases used for each elementary shift of charges.

The camera’s crystal oscillator needs to have a frequency at least a few times higher than the signal bandwidth the camera produces. All other syncs, as well as transfer pulses, are derived from this master frequency. The drawing above shows conceptually how this charge transfer is performed with the three-phase shift on the CCD sensor. The pulses indicated with q1, q2, and q3 are low-voltage pulses (usually between 0 and 5 V DC), which explains why CCD cameras have no need for the high voltages that were required by tube cameras.
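The following toy model is only a conceptual illustration of the shifting idea, not of real CCD drive electronics: each clocking cycle (the q1, q2, q3 phases taken together) moves every charge packet one cell closer to the output stage.

```python
# Toy model of shifting charge packets towards the output stage.
# Each list element represents the charge held in one elementary cell.
cells = [120, 45, 230, 0, 90]   # hypothetical charge packets (electrons)

output = []
for _ in range(len(cells)):
    # One full clocking cycle conceptually moves every packet one cell to the
    # right; the last cell spills into the output stage and is read out.
    output.append(cells[-1])
    cells = [0] + cells[:-1]

print(output)   # charges read out in reverse spatial order: [90, 0, 230, 45, 120]
```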

CMOS cameras

A variation of the CCD technology, called CMOS (standing for Complementary Metal Oxide Semiconductor, as explained previously), is becoming more popular these days.

CMOS image sensors were easier and cheaper to produce, but could not deliver the same pixel uniformity and density as CCDs. The possibility, however, of adding micro-electronic components next to each pixel was an extremely attractive option, offering new perspectives. Even converting the analog signal to digital on the imaging sensor itself became possible. In addition, CMOS power consumption was much lower than that of CCDs. These reasons were sufficient to keep imager manufacturers searching for improved CMOS technologies. CMOS imagers sense light in the same way as CCDs,

but from the point of sensing onwards, everything is different. The charge packets are not transferred; instead, they are detected as early as possible by charge-sensing amplifiers, which are made from CMOS transistors. In some CMOS sensors, amplifiers are implemented at the top of each column of pixels – the pixels themselves contain just one transistor, which is used as a charge gate, switching the contents of the pixel to the charge amplifiers. These passive-pixel CMOS sensors operate like analog dynamic random-access memory (DRAM). Conceptually, the weak point of CMOS sensors is the problem of matching the multiple different amplifiers within each sensor. Some manufacturers have overcome this problem by reducing the residual level of fixed-pattern noise to insignificant proportions. With the initial CMOS designs and prototype cameras, there were problems with low-quality, noisy images that made the technology somewhat questionable for commercial applications. Chip process variations produce a slightly different response in each pixel, which shows up in the image as noise.

In addition, the amount of chip area available to collect light is also smaller than that for CCDs, making these devices less sensitive to light.
Essentially, both types of sensors are solid state; they convert light into electric charge and process it into electronic video signals. In a CCD sensor, every pixel’s charge, in the form of electron current, is transferred through output nodes, then converted to voltage, and sent off-chip as an analog signal. All of the pixel area in a CCD can be devoted to light capture, and the output’s uniformity is high. In a CMOS sensor, however, each pixel has its own charge-to-voltage conversion, and the sensor often includes electronic components such as amplifiers, noise-correction, and digitization circuits, so that the chip may output digital bits rather than just analog values. With each pixel doing its own conversion, uniformity is lower. In addition, the micro-components added around the pixels take up space, thus reducing the CMOS light-sensing area. But the advantage is that the imaging sensor can be built to require less off-chip circuitry for basic operation. This is especially critical compared with the clocking requirements of CCDs, which need multiple nonstandard supply voltages and have high power consumption. With CMOS technology this is much simpler and less power demanding.

Spectral response of imaging sensors
The spectral sensitivity of imaging sensors varies slightly with different silicon substrates, but the general characteristic is a result of the photoelectric effect: longer wavelengths generate more electrons in the sensor’s silicon structure. How many electrons a packet of photons can generate in the imaging electronics is called the quantum efficiency of the sensor. In layman’s terms, this translates to a higher sensitivity of the sensor to red and infrared light. A typical, generalized imaging sensor spectral response curve, compared to the human eye, is shown in the drawing below.
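As a simple numeric illustration of quantum efficiency (the figures are assumed, not taken from any particular sensor):

```python
photons_per_pixel = 1000      # hypothetical photon count during one exposure
quantum_efficiency = 0.4      # assumed QE at this wavelength (40%)

electrons = photons_per_pixel * quantum_efficiency
print(electrons)              # 400.0 electrons collected in the pixel
```

A QE of 0.4 simply means that, on average, four electrons are generated for every ten photons that land on a pixel at that wavelength.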

A color camera needs to “see” visible light with a spectral sensitivity similar to that of human eyes. Only then can it produce colors as we see them. So, although imaging sensors can by design pick up infrared light as well, we need to remove such wavelengths in order to see colors correctly. This is done with so-called optical infrared cut (IRC) filters. These filters are optically precise plan-parallel pieces of glass, mounted on top of the imaging sensor. As the name suggests, they behave as optical low-pass filters, with a cut-off near 700 nm, that is, near the color red.

Color cameras must use an IRC filter if we are to see colors.
If the sensor needs to see infrared light, as is often the case in very low-light CCTV surveillance, we must remove the IRC filter. This is typically done by mechanically removing it from the top of the sensor. When the IRC filter is removed, not only does the sensor “see” the infrared light, but its low-light response also increases, making the camera appear more sensitive in low light. Of course, if the IRC filter is removed, colors cannot be reproduced.

Color cameras that retain the IRC filter cannot see infrared light. Cameras described as Day/Night cameras usually need a removable IRC filter, which requires some sort of electromechanical mechanism.

Seeing colors


A color camera can produce RGB (Red, Green, Blue) colors with two basic methods:

- an optical split prism with three separate sensors

- color sub-pixel filters (referred to as a Bayer mosaic filter) with one sensor only

With the Bayer mosaic approach, each pixel is basically divided into four quadrants, where the green color takes two diagonally opposite quadrants, and red and blue take the remaining two. Now, clearly, the pixels themselves do not sense different wavelengths (colors) differently. In other words, it is wrong to assume that there are red-, green-, and blue-sensitive pixels; rather, there are optical filters on top of the pixels that pass only certain portions of wavelengths, corresponding to the primary colors red, green, and blue. The filters’ spectral response is shown in the diagram “Typical spectral sensitivity of a color imaging sensor” on the previous page. The reason there are two green quadrants in each pixel is simply that the human eye is most sensitive to green, and the green component actually carries most of the luminance information in an image. It should be noted that the light-sensitive pixels themselves are of the same silicon structure and are not different from one another.
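A minimal sketch of the Bayer idea follows, using a naive nearest-neighbour average to fill in the two missing colors at every pixel; real cameras use far more sophisticated demosaicing, and the raw numbers here are invented:

```python
import numpy as np

# 4x4 block of raw sensor values (hypothetical) and the colour filter laid
# over it in the Bayer arrangement:
#   G R G R
#   B G B G
#   G R G R
#   B G B G
raw = np.arange(16, dtype=float).reshape(4, 4)
pattern = np.empty((4, 4), dtype="U1")
pattern[0::2, 0::2] = "G"; pattern[0::2, 1::2] = "R"
pattern[1::2, 0::2] = "B"; pattern[1::2, 1::2] = "G"

def demosaic(raw, pattern):
    """Naive demosaic: each colour at a pixel = mean of that colour in its 3x3 neighbourhood."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
            for ch, colour in enumerate("RGB"):
                rgb[y, x, ch] = raw[ys, xs][pattern[ys, xs] == colour].mean()
    return rgb

print(demosaic(raw, pattern)[1, 1])   # interpolated R, G, B values at one pixel
```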
New developments are continually improving CCD (and the newer CMOS) imaging technology, and one of them is the multi-layered single-chip color sensor developed by Foveon Inc. Instead of having separate pixels for each primary color, they invented a layering technique where colors are separated in depth as the light penetrates the same pixel. The result is better color reproduction and higher resolution.

White balance

From color cameras we require, apart from resolution and minimum illumination, good and accurate color reproduction. Modern cameras have through-the-lens automatic white balancing (TTL-AWB).

This is achieved by putting a white piece of paper in front of the camera and then turning the camera on. This stores correction factors in the camera’s memory, which are then used to modify all other colors. Naturally, the result depends very much on the color temperature of the light source illuminating the area where the camera is mounted at power-up.
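In essence, the stored correction factors are per-channel gains derived from the white reference. The sketch below illustrates the idea with assumed RGB readings; it is not any particular camera’s algorithm:

```python
# RGB values measured off the white reference at power-up (hypothetical):
white_ref = {"R": 180.0, "G": 220.0, "B": 160.0}

# Gains that map the white reference to a neutral grey (equal R, G, B):
target = max(white_ref.values())
gains = {ch: target / val for ch, val in white_ref.items()}

def white_balance(pixel):
    """Apply the stored correction factors to any other colour."""
    return {ch: min(255.0, pixel[ch] * gains[ch]) for ch in pixel}

print(gains)
print(white_balance({"R": 90.0, "G": 110.0, "B": 80.0}))   # a grey object becomes neutral
```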

Although the majority of cameras today have AWB, most models also have manual white balance (MWB) adjustments. In MWB cameras there are usually at least two settings (switch selectable): indoor and outdoor. Indoor is usually set for a light source with a color temperature of around 2800 K to 3200 K, while outdoor is usually around 5600 K to 6500 K. These correspond to average indoor
and outdoor light situations. Some simpler cameras, however, may have potentiometers accessible from the outside of the camera for continuous adjustment

Newer design color cameras have automatic tracking white balance (ATW), which continually adjusts (tracks) the color balance as the camera’s position or light changes. This is especially practical for PTZ cameras and/or areas where there is a mix of natural and artificial light.
In a CCTV system where pan and tilt head assemblies are used, it is possible while panning for a camera to come across areas lit by sources of different color temperature, and ATW handles such transitions automatically.

Note: Color temperature is a parameter describing the color of a visible light source by comparing it to the color of light emitted by an idealized opaque, non-reflective body. The temperature of the ideal emitter that matches the color most closely is defined as the color temperature of the original visible light source. Color temperature is usually measured in kelvins (K).

 

Types of charge transfer in CCDs

CCD sensors, as used in CCTV, can be divided into three groups based on their charge-transfer technique. The very first design, dating from the early 1970s, is known as frame transfer (FT). Later, engineers invented a new method of transfer called interline transfer (IT).

In the IT design, the exposed picture is transferred, during the vertical sync pulse period, to the adjacent masked (shielded) columns. Another interesting benefit of the IT design is the possibility of implementing an electronic shutter. This is an especially important feature, where the natural exposure time of 20 ms (1/50 s) for PAL, or 17 ms (1/60 s) for NTSC, can be electronically controlled and reduced to whatever shutter speed is necessary, while still producing a 1 Vpp video signal. This is perhaps one of the most important benefits of the interline transfer design, which is why it is widely used in CCTV.

The best CCD design so far is the so-called frame interline transfer (FIT), offering all the features of the interline transfer plus even less smear and a better S/N ratio.

With all these refinements, FIT CCD sensors have a very high dynamic range, low smear, and a high S/N ratio. We should point out that no matter how good the camera electronics is, if the source of information – the CCD sensor – is of inferior quality, the camera will be inferior too. Most of the handful of sensor manufacturers divide CCD products of the same type into a few different categories, depending on the pixel quality and the number of possible dead pixels. Different camera manufacturers may use different categories of the same sensor.

CCD chip as a sampler

The CCD sensor used in CCTV is a two-dimensional matrix of picture elements
(pixels). The resolution that such a pixel matrix produces depends on the number of pixels and the lens resolution. Since the latter is usually higher (or at least it should be) than the resolution of the imaging sensor, we tend not to consider the optical resolution as a bottleneck. However, as mentioned in the section on CTF and MTF, lenses are made with a resolution suitable for a certain image size, and care should be taken to use the appropriate optics with various chip sizes. This is especially critical with HD
sensors, or sensors of higher megapixel count.

CCD and CMOS sensors have discrete pixels and therefore the information contained in one TV line is composed of discrete values from each pixel. These discrete values do not represent digital information (as some may think) but rather discrete analog samples of the lens projected image. In a way, the image sensors are optical samplers. As with other samplers (like in music), we do not get the total information of each line, but
only discrete values at positions equivalent to the pixel positions.

Correlated double sampling (CDS)
The noise in an imaging sensor chip has several sources. The most significant is thermally generated noise, but a considerable amount can be generated by impurities in the semiconductors and the quality of manufacture. High noise reduces the image sensor’s dynamic range, which in turn degrades image quality.

A careful CCD device design and fabrication can minimize the noise, although it cannot eliminate it completely. Also, low operating temperature can reduce thermally generated noise. Unfortunately, in
practical CCTV systems used in surveillance, the user rarely has control over these parameters.

Camera specifications and their meanings

The basic objective of the television camera is to capture images, break them up into a series of still frames and lines, then transmit and display them on a screen rapidly enough that the human eye perceives them as motion pictures.

Comparison tests are quite often the best (and probably the only) objective way to check camera quality, such as resolution, smear, noise, or sensitivity.

It is a fact that the general impression of a good quality picture is not based on just one factor, but on a combination of many attributes – resolution, smear, sensitivity, noise, gamma, and so on.

We will go through some of the most important features:
- Camera sensitivity
- Minimum illumination
- Camera resolution
- S/N ratio
- Dynamic range



Camera Sensitivity

Sensitivity is represented by the minimum iris opening (maximum F-stop) that still produces a full 1 Vpp (1 V peak-to-peak) video signal of a gray scale test chart, when that test chart is illuminated with 2000 lx from a light source with a color temperature of 3200 K.

The test chart used to check a camera’s sensitivity has to have a gray scale with tones from black to white, and a reflection coefficient of 90% for the white portion of the gray scale. One of the standard test charts used for such purposes is the EIA gray scale chart shown below in the middle.

The manual iris lens is closed just until the white peak level starts dropping below 700 mV, relative to the blanking level. The F-number obtained from the lens’s iris setting, such as F-4 or F-5.6, represents the camera sensitivity. The higher the number, the more sensitive the camera.
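Since the light reaching the sensor falls off with the square of the F-number, each full stop in this figure corresponds to a factor of two in light. A quick check in Python, using the F-numbers mentioned above as example values:

```python
def relative_light(f_number, reference_f=4.0):
    """Light reaching the sensor relative to a reference aperture (inverse square of F-number)."""
    return (reference_f / f_number) ** 2

for f in (4.0, 5.6, 8.0):
    print(f"F-{f}: {relative_light(f):.2f} x the light of F-4")
```

A camera that still delivers a full signal at F-5.6 needs only about half the light of one that requires F-4, which is why the higher F-number indicates the more sensitive camera.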

Minimum illumination

Minimum illumination refers to the lowest possible light level at the object at which a given camera produces a recognizable video signal. It is therefore expressed in lux at the object at which such a signal is obtained.

Camera resolution

Resolution is one of the most frequently quoted parameters of a camera or complete system. When talking about the resolution of a complete system (camera-transmission-recording-monitor), the most important part is the input (i.e., the camera resolution). There are vertical and horizontal resolutions, and they are measured using a test chart.
Vertical resolution is the maximum number of horizontal lines that a camera is capable of resolving.
Vertical resolution is the maximum number of horizontal lines that a camera is capable of resolving.
This number is limited to 625 horizontal lines by the CCIR/PAL standard, and to 525 by the EIA/NTSC recommendations. The real vertical resolution (in both cases), however, is less than these numbers. If we consider the active lines, the vertical sync pulses, equalization lines, and so on, the maximum for vertical resolution will be 576 lines for CCIR/PAL and 480 for EIA/NTSC. This is the theoretical maximum for analog cameras. In reality, this needs to be further corrected by the Kell factor of 0.7,
to get a maximum realistic vertical resolution of about 400 TV lines for CCIR/PAL.

A similar deduction can be applied to the EIA/NTSC signal, where the maximum realistic vertical resolution is about 330 TV lines.

Horizontal resolution is the maximum number of vertical lines that a camera is capable of resolving. This number is limited only by the sensor technology and the monitor quality. PAL analog cameras typically have a maximum (theoretical) horizontal resolution of 576 TV lines. This is because we are dealing with 576 active lines and an analog aspect ratio of 4:3, which yields 768 horizontal pixels, since 576 is 3/4 of 768. The above applies to a camera with a sensor that has “square pixels,” which is the case with most analog sensors. So, the maximum horizontal resolution of analog CCD/CMOS cameras is usually 75% (3/4) of the number of horizontal pixels on the sensor, and it is expressed in TV lines. Clearly, if a lens is not of matching quality, and when the Kell factor is applied, this theoretical maximum is going to be lower. When counting vertical lines in order to determine horizontal resolution, we count only over a horizontal width equal to the vertical height of the monitor. The idea behind this is to have square pixels. So, when we describe analog camera resolution, we always refer to horizontal resolution as TV lines (TVL) and not just lines.

There is an exception to the above rule, and that is for imaging sensors that do not necessarily have square pixels, such as some made by Sony and Panasonic. They have produced sensors with over 970 pixels in the horizontal direction and 576 in the vertical (when PAL is used) or 480 (when NTSC is used). With such sensors it is possible to achieve higher horizontal resolution; the pixels are no longer square, but rather rectangular (squashed in the horizontal direction). The theoretical horizontal resolution limit would be, again, 3/4 of the 976 pixels, which yields a theoretical maximum horizontal resolution of around 730 TVL. In reality, manufacturers quote 650 TVL (some a little more), which is still outstanding. This, of course, will depend heavily on a good quality lens and good illumination.
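As a quick sanity check of the numbers in the last few paragraphs (a calculation sketch only, using the 4:3 aspect ratio and the Kell factor of 0.7 stated above):

```python
ASPECT = 4 / 3          # width : height of an analog picture
KELL   = 0.7            # Kell factor for interlaced analog video

def horizontal_tvl(horizontal_pixels):
    """Theoretical horizontal resolution in TV lines (counted over one picture height)."""
    return horizontal_pixels / ASPECT

print(horizontal_tvl(768))        # 576 TVL for a square-pixel PAL sensor
print(horizontal_tvl(976))        # ~732 TVL for a 976-pixel sensor
print(576 * KELL)                 # ~403 -> ~400 TVL realistic vertical resolution (PAL)
print(480 * KELL)                 # ~336 -> ~330 TVL realistic vertical resolution (NTSC)
```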

Resolution should never be measured in low light, as it will always show much lower values due to noise masking the fine details.

With the appearance of digital HD and megapixel cameras, the concept of resolution changes slightly. We no longer have only 4:3 aspect ratio sensors, and we are no longer limited to the number of scanning lines of the analog system (576 for PAL and 480 for NTSC). So, in effect, although it is possible to use the same definition of TVL as in analog video, it is more practical to simply state the horizontal and vertical pixel counts of the sensor.

For this reason, a completely new test chart, the first of its kind in our industry, was designed; it can be used to check both formats, whether 4:3 (typically SD or MP) or 16:9 aspect ratio (typically HD). The only additional thing to consider when dealing with digital cameras is compression artifacts. So, if a sensor/camera is stated to be 1080p high definition, for example, it does not necessarily mean that the measured resolution will be exactly 1920 x 1080 pixels. In fact, it is always going to be lower, because there will be optical limitations if the lens is not adequate for such a sensor and, more importantly, there will be compression artifacts that reduce the apparent resolution (that is, when IP cameras with compression are used, which will be most of the time). More details on what can be measured with this test chart, and how, will follow.

It should be noted that when measuring HD or MP cameras, the resolution is better expressed simply as lines, rather than TV lines, or even more appropriately just in pixels. This allows us to discuss and measure megapixel images of various aspect ratios and formats.

Signal/noise ratio (S/N)

The S/N ratio in a CCD camera is defined as the ratio between the signal and the noise produced by the sensor combined with the camera electronics. The signal-to-noise (S/N) ratio is an expression of how good a camera signal can be, especially at lower light levels. Noise cannot be avoided, only minimized. It depends mostly on the imaging sensor quality, the size of the sensor pixels, the electronics, and external electromagnetic influences, but also very much on the temperature. The camera’s metal enclosure (if, indeed, it is a metal one) offers significant protection from external electromagnetic influences. Internal noise sources include both the passive and active components of the camera, their quality, and the circuit design; noise depends very much on the temperature. This is why, when stating the S/N ratio, a camera manufacturer should indicate the temperature at which this measurement was taken. If this is not done, it is typically assumed to be room temperature.
In digital CCTV, noisy pictures reduce the compression efficiency and increase the storage space used, as most encoders treat random noise as video detail.

The units for expressing ratios (including the S/N) are called decibels and are written as dB.

The general formula for voltage and current ratios is:
S/N = 20 log(Vs / Vn)

where Vs is the signal voltage and Vn is the noise voltage. Current values are used when a current ratio needs to be shown.
If a power ratio is the purpose of a comparison, the formula is a little bit different:
S/N = 10 log(P1/P2)

Practically, a S/N ratio of more than 48 dB is considered good for an analog CCTV camera.
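To put these figures in perspective, the voltage formula above can be evaluated directly. The 2.8 mV noise figure below is just an assumed value chosen to illustrate what 48 dB means against a 700 mV signal:

```python
import math

def snr_db(v_signal, v_noise):
    """Signal-to-noise ratio in dB for voltage (or current) values."""
    return 20 * math.log10(v_signal / v_noise)

print(snr_db(700, 2.8))     # ~48 dB: roughly 2.8 mV of noise on a 700 mV signal
print(10 ** (48 / 20))      # ~251: a 48 dB S/N means the signal is ~250x the noise voltage
```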



Dynamic range of imaging sensors

The dynamic range of an imaging sensor is defined as the maximum signal charge (saturation
exposure) divided by the total RMS (root-mean-square) noise equivalent exposure. DR is similar to the S/N ratio, but it refers only to the sensor dynamics when handling dark to bright objects in one scene. While the S/N ratio refers to the complete signal including the camera electronics and is expressed in dB, the DR is a pure ratio number, although it can also be expressed in dB, i.e., as a logarithm of the ratio.

The dynamic range is limited by the pixel size (which keeps getting smaller) and the noise levels (which can never be eliminated). There are always pixel imperfections, which appear as fixed-pattern noise, and, in addition, there are always thermally generated electrons under normal circumstances. These exist at any temperature above absolute zero (-273°C). A small pixel element can only hold so many electrons created by light photons. The DR actually shows the light range an imaging sensor can handle – only this light range is not expressed in photometric units but by the generated electrical signal. It starts from very low light levels, equal to the imaging sensor’s RMS noise, and goes up to the saturation level. Since this is a normalized ratio of two electron counts, it is a pure number, usually in the order of thousands. The dynamic range of a good solid-state imaging device is 3,000~6,000:1. Converted into engineering terminology, this is equivalent to 70~80 dB. This number is even smaller for sensors with smaller pixels. In comparison, human eyes see anything from full moonlight levels (0.1 lux) up to a full sunny beach (100,000 lux), which is a ratio of 1,000,000:1, or, in engineering terminology, 120 dB.
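The conversions quoted above follow from the same 20 log formula used for the S/N ratio (treating the dynamic range as a voltage-like ratio of charge); a quick check:

```python
import math

def ratio_to_db(ratio):
    """Express a (charge or voltage) ratio in decibels."""
    return 20 * math.log10(ratio)

print(round(ratio_to_db(3_000)))       # ~70 dB
print(round(ratio_to_db(6_000)))       # ~76 dB, i.e. a good sensor sits around 70~80 dB
print(round(ratio_to_db(1_000_000)))   # 120 dB, the approximate range of the human eye
```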

When saturation levels are reached during a sensor exposure (1/50 s for PAL, or 1/60 s for NTSC), the blooming effect may become apparent: excessive light saturates not only the picture elements (pixels) on which it falls but the adjacent ones as well. As a result, resolution and detail information are lost in the bright areas. To solve this problem, a special anti-blooming section is designed into most CCD chips.

This section limits the amount of charge that can be collected in any pixel. When anti-blooming is designed properly, no pixel can accumulate more charge than the shift registers can transfer.

So, even if the dynamic range of such a signal is limited, no details are lost
in the bright areas of the image. This may be extremely important in difficult lighting
conditions such as looking at car headlights or looking at people in a hallway against light in the background.

Some camera makers have introduced a special design that blocks the oversaturated
areas during the digital signal processing stage. The AGC circuitry then does not see
extremely bright areas as a white peak reference point, so much lower levels are taken as white peaks, thus making the details in the dark areas more recognizable.

Others use new methods of imaging sensor operation where, instead of a single exposure every field time, two exposures are made during this period: one very short and the other of normal duration. The two exposures are then combined into one frame, so that bright areas are taken from the short exposure, giving detail in the very bright parts, and the darker areas are taken from the longer exposure, giving detail in the dimmer parts of the same picture. The overall effect is that the camera’s dynamic range is increased a number of times. Some manufacturers call this “wide dynamic range” (WDR), others the “super-dynamic effect.”
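A purely conceptual sketch of how two exposures might be merged into one wide-dynamic-range frame; real manufacturers’ blending algorithms are proprietary and considerably more elaborate, so the threshold and scaling below are only assumptions:

```python
import numpy as np

SATURATION = 255.0   # assumed saturation level of a pixel value

def combine_wdr(long_exp, short_exp, ratio):
    """Use the long exposure where it is valid, otherwise rescale the short one.

    long_exp, short_exp : pixel values from the normal and the short exposure
    ratio               : long/short exposure time ratio (e.g. 16)
    """
    long_exp = np.asarray(long_exp, dtype=float)
    short_exp = np.asarray(short_exp, dtype=float)
    return np.where(long_exp < SATURATION, long_exp, short_exp * ratio)

# Hypothetical pixel values: the second pixel is blown out in the long exposure.
print(combine_wdr([120, 255], [8, 40], ratio=16))   # -> [120. 640.]
```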

Another interesting design was made possible by the introduction of CMOS technology. Pixim made a sensor where electronic iris control is possible at a local pixel level. When excessive light is projected onto one area, the electronic iris shortens the exposure in that area only, leaving the rest of the pixels exposed for the longer electronic-iris time, allowing more detail in the shadow areas of the picture. This
was simply impossible with the CCD technology.

Special low-light-intensified cameras

Modern imaging sensors can see in light as low as human eyes can see. Certainly, this can be further improved by using extreme AGC (although this will increase the noise). The most important factor in how well a camera sees in low light has already been explained under “Minimum illumination.”

Intensified cameras have an additional element, called a light intensifier, that is usually installed
between the lens and the camera. The light intensifier is basically a tube that converts the very low light, undetectable by the CCD chip, to a light level that can be seen by it.

First, the lens projects the low light level image onto a special faceplate that acts as an electronic multiplying device, where literally every single photon of light information is amplified to a considerable signal size. The amplification is done by an avalanche effect of the electrons, which light photons produce when attracted to a high-voltage
static field. The resultant electrons hit the phosphor coating at the end of the intensifier tube, causing the phosphor to glow, thus producing visible light (in the same manner as when an electron beam produces light onto a B/W CRT). This now visible image is then projected onto the CCD chip, and that is how a very low light level object is seen by the
camera.



Thermal imaging cameras

Imaging sensors are very sensitive to the infrared spectrum, which is an unwanted phenomenon for a typical CCTV camera working with visible light. Infrared sensitivity, however, is very useful and is exploited in thermal imaging cameras. These are cameras that can resolve temperature differences on the object with high precision, better than 0.1°C, and cover temperatures as high as 1200°C. Highlighting temperature differences is the most important tool in the usage of thermal imaging cameras. Typically, hotter areas are shown in red and colder ones in blue, with the temperature variations between these extremes shown in yellow and green.

Thermal imaging cameras can be used to detect intruders in total darkness, for example, without having to use infrared illuminators. More importantly, thermal imaging cameras can see at night as well as by day, and also through fog and rain. Fire-fighters use thermal imaging cameras to locate the possible source of a fire. Likewise, a part in an engine that heats up more than it should can easily be picked up by a thermal imaging camera, and electrical installations and fuse boxes can be checked for excessive current draw. The video output of thermal imaging cameras does not have to be of very high resolution; typical sensors have 320 x 240 pixels, but 640 x 480 is not uncommon. The video output format can be analog PAL or NTSC, or a digital stream such as M-JPEG, MPEG-4, or H.264. As such, thermal imaging signals can be recorded on any DVR as a normal video signal.

Some manufacturers even put two cameras on one PT head – one being a thermal imaging camera and the other a standard camera – so that when monitoring or surveying an area both types of video streams are available.

 

Multi-sensor panoramic camera

CCTV manufacturer Dallmeier produced a new type of camera, the multi-sensor Panomera. This is a camera made up of multiple sensors, able to produce panoramic images of large areas with exceptional clarity. The most accurate description of this technology would be a multi-sensor, multi-focal panoramic camera system with live views. The Panomera is, in essence, an array of megapixel cameras with optically and mechanically perfectly aligned views so that, although they are separate sensors, they make up a continuous panoramic view of a larger area.
There are some important differences between this concept and a typical single-sensor megapixel camera.
The first is that the Panomera can show incredible detail by digital zooming in, without pixelation.
Also, because each sensor is using a very efficient H.264 video compression, Panomera is able to show live and playback images with minimum bandwidth requirements. The current maximum number of sensors is 17 sensors in one enclosure, making it effectively a 68 MP camera, since each sensor works
in the 4 MP mode. The streaming bandwidth requirement is still less than 100 Mb/s of data traffic. The viewing experience is live motion in every section of the panoramic view, no matter how zoomed it is, and no matter if it is live or playback.

All camera sensors produce multicast streams, which means they are available to multiple operators simultaneously without increasing the streaming bandwidth or putting a load on the encoders. Each operator can concentrate on their own section of the same panoramic view without losing or affecting the other operators.

Interestingly, the Panomera is not limited to a standard 16:9 or 4:3 aspect ratio view, but can use any combination of sensors to cover an area most efficiently. This could be a wide horizontal strip monitoring an airport tarmac or a stadium with thousands of spectators, or even a vertical strip monitoring tall buildings, elevators, open lift shafts, or anything that is awkward to see with a horizontal 16:9 aspect ratio HD camera.
