Championing image processing technology in British manufacturing.

Take a closer look at vision technology.

High-speed cameras. Image processing. Line scanning. Read up on all the latest vision technology innovations right here; the sections below give a comprehensive summary of each topic.

High-Speed Cameras

Exposure time

Exposure time is the length of time during which light is allowed to fall onto the camera sensor. Longer exposure times allow the sensor to gather more light, but they also generate more noise on the sensor, so standard vision cameras usually specify a maximum exposure time to keep noise under control. Short exposure times are needed when imaging a fast-moving scene to avoid motion blur; typically the exposure should be short enough that the object moves by less than one pixel during it. Consider an object moving at 100 mm/s with an area of 100 mm x 100 mm to be imaged. Using a camera with a resolution of 1K x 1K pixels, each pixel images an area of 0.1 mm. In one second the object moves by 1000 pixels, so an exposure time of 1/1000th of a second or shorter is required to avoid motion blur.
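
This calculation is easy to reproduce. The sketch below (Python, using the figures from the example above; the function name is illustrative) returns the longest exposure for which the object moves by less than one pixel.

```python
# Maximum exposure time that keeps motion blur below one pixel.
# Figures follow the example in the text; names are illustrative only.

def max_exposure_s(field_of_view_mm, pixels, speed_mm_per_s):
    """Longest exposure for which the object moves by less than one pixel."""
    pixel_size_mm = field_of_view_mm / pixels      # area imaged by one pixel
    return pixel_size_mm / speed_mm_per_s          # time to cross one pixel

t = max_exposure_s(field_of_view_mm=100, pixels=1000, speed_mm_per_s=100)
print(f"Maximum exposure: {t * 1000:.1f} ms")      # 1.0 ms, i.e. 1/1000 s
```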

Frame rates

The frame rate, or the number of complete images an area scan camera can output in a given time, is an important specification. For example, a production line where objects pass by at a rate of 20 units per second will require a camera capable of capturing 20 discrete frames per second. Frame rates are quoted at the full resolution of the particular camera sensor, but many cameras offer the ability to partially scan the sensor, or sample a discrete portion of it, allowing much higher frame rates for that area. This can be useful if the full frame is not required for imaging. A technique known as ‘binning’ can also increase frame rates. Binning combines the output of adjacent pixels on a sensor, which increases sensitivity and signal-to-noise ratio (S/N) but decreases spatial resolution.
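
A rough sketch of these trade-offs is shown below; the camera specification is invented purely for illustration. It checks whether the full-resolution frame rate keeps up with the production rate and shows the effect of 2x2 binning on resolution and per-pixel signal.

```python
# Invented camera specification, for illustration only.
parts_per_second = 20              # production rate from the example above
camera_full_res_fps = 25           # full-resolution frame rate of a hypothetical camera
sensor_w, sensor_h = 2048, 1536    # hypothetical sensor resolution

print("Keeps up at full resolution:", camera_full_res_fps >= parts_per_second)

# 2x2 binning: each output pixel combines four sensor pixels, halving the
# spatial resolution in each axis while increasing the signal per output pixel.
bin_factor = 2
binned_w, binned_h = sensor_w // bin_factor, sensor_h // bin_factor
print(f"Binned resolution: {binned_w} x {binned_h} "
      f"({bin_factor * bin_factor}x signal per output pixel)")
```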

Line scan imaging

Line scan cameras are used extensively in high-speed imaging. In general, a single line of pixels is scanned at high speed and the frame is built up by the motion of the object past the camera. The size of the object to be imaged and the speed of movement determine the line rate required in the camera. Line scan cameras have shorter exposure times and therefore require greater illumination levels.

UKIVA members can offer further advice on the different camera formats and technology.

Image Processing

All industrial vision systems require an element of machine vision software, whether just for camera control or to build a complete bespoke application. For many industrial inspection needs, an easy-to-configure machine vision development environment with simple user interfaces allows the most cost-effective solutions to be deployed. For more bespoke requirements, companies with good software development skills often use a machine vision software library.

With sophisticated image processing and measurement capabilities and simple point-and-click user interfaces, vision systems have their role in the automation process, but they also provide a powerful link with robotics.

The term ‘image processing software’ is very broad, as it also covers the commercially available photo manipulation packages used by photographers. Using image processing software, photos and other image types can be manipulated to improve an image or to remove unwanted components or artefacts (think of imperfections on a model’s otherwise perfect face), so ‘airbrushing’ is a form of image processing.

In the context of machine vision, image processing is used to enhance, filter, mask and analyse images. Well-known machine vision software packages such as Common Vision Blox, Scorpion Vision Software, Halcon, Matrox Imaging Library and Cognex VisionPro are applications that run on Microsoft Windows and are used to create advanced and powerful automation software that takes an image as input and outputs data based on the content of that image. Ultimately, in commercial machine vision, image processing is used to classify, read characters, recognise shapes or measure.
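
As a minimal sketch of the classify-and-measure idea, the snippet below uses the open-source OpenCV library (not one of the commercial packages named above); the file name and threshold value are placeholders.

```python
# Threshold an image and report the size and position of each blob found.
# File name and threshold are placeholders; OpenCV stands in for the
# commercial packages mentioned in the text.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)     # acquired image
_, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"Blob at ({x}, {y}), size {w} x {h} px, area {cv2.contourArea(c):.0f} px^2")
```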

Industrial Cameras

Cameras are present every day of our lives: CCTV on buildings is commonplace when walking down the street, speed and safety cameras are evident when driving, and every modern mobile phone has a built-in camera. These devices are an integral part of our lives today. However, very few people are aware that there are hundreds of cameras being used behind the scenes in manufacturing plants worldwide. These industrial cameras, also known as machine vision cameras, are used to inspect a vast range of products, from headache tablets to shampoo bottles to spark plugs. The list goes on and covers industries such as automotive, pharmaceutical, food and beverage, electronics, and print and packaging.

So, what is an industrial camera? It is a camera designed to high standards, with repeatable performance, and robust enough to withstand the demands of harsh industrial environments. These are commonly referred to as machine vision cameras because they are used in manufacturing processes for inspection and quality control.

Machine vision cameras typically conform to a defined standard such as FireWire, GigE Vision, CameraLink, USB or CoaXPress. The purpose of these standards is to facilitate ease of integration and to ensure future flexibility for camera upgrades.

There are two main types of camera: area scan and line scan.

An area scan camera has a CCD or CMOS sensor arranged as a 2D matrix of pixels. This results in an image consisting of pixels in the X and Y directions (a normal-looking image, like one taken with a mobile phone). Industrial area scan cameras run from tens to hundreds of frames per second.

With a line scan camera, the CCD or CMOS sensor typically contains only a single row of pixels. This means that the object to be captured must be moved under the line scan camera to ‘build’ a 2D image. Line scan cameras run from hundreds to thousands of scans per second and are ideal for ‘web’ applications where products such as paper and textiles are manufactured continuously, and/or where the products are large.

Machine vision cameras can be combined with illumination, optics, image processing software and robots to create fully automated inspection solutions.

UKIVA members can offer further advice on the different camera formats and technology.

Smart Cameras

Smart cameras combine the sensor, processor and I/O of a vision system in a compact housing, often no bigger than a standard industrial camera, so that all of the image processing is carried out on board.

By combining all these items in a single package, costs are minimised. These systems are ideal where only one inspection view is required or where no local display or user control is needed.

Many smart cameras are offered with additional extension products such as expanded I/O, image display and control interfaces.

3D Image Methods

Laser profiling

Laser profiling using triangulation is one of the most popular 3D techniques. The object to be measured passes through a line of laser light and a camera mounted at a known angle to the laser records the resulting changing profile of the laser line. These 3D profiles deliver high measurement resolution with a good measurement range. They produce a point cloud that when projected onto a designated plane creates a depth map that is conveniently analysed using well-known 2D vision tools like blob analysis, pattern recognition and optical character recognition. This technique relies on the object moving relative to the laser line so this configuration is particularly popular on production and packing lines where the product moves on a conveyor. The system can be configured using individual laser sources and cameras, or integrated systems where the source and camera are housed in a single enclosure. Care must be taken to avoid shadowing, where higher regions of the object block the view of the laser line so that data from the structures behind cannot be obtained. One solution is the use of several cameras which track the laser line from different angles and then merge the different data sets to a single height profile using sophisticated software tools.
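
A hedged sketch of the underlying triangulation geometry is shown below: a height change on the object shifts the imaged laser line sideways, and the shift is converted back to a height using the known camera angle. The angle and pixel calibration are assumptions for illustration, and real systems use a full calibration rather than this idealised formula.

```python
import math

# Idealised laser triangulation: a height change h shifts the imaged laser
# line sideways by h * tan(angle). All figures are assumptions for illustration.
camera_angle_deg = 30.0     # angle between camera viewing axis and laser plane
mm_per_pixel = 0.05         # lateral calibration of the camera at the laser line

def height_from_shift(shift_px):
    """Convert a laser-line shift in pixels to a surface height in mm."""
    shift_mm = shift_px * mm_per_pixel
    return shift_mm / math.tan(math.radians(camera_angle_deg))

print(f"A 40 px line shift corresponds to {height_from_shift(40):.2f} mm of height")
```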

Stereo imaging

Another common 3D method mimics nature by using a binocular stereo set-up where two cameras are used to record 2D images of an object. A 3D image can then be calculated using triangulation. This technology also allows for the movement of the objects to be measured during recording. A random static illumination pattern can be used to add arbitrary texture to plain surfaces and objects that do not have the natural edges (texture) information that the stereo reconstruction algorithms require. This technology has proved very successful in applications such as volumetric measurements and robot bin-picking. Some systems are available which utilise line scan cameras instead of area scan cameras and are particularly useful for fast-moving objects or web applications. Photometric stereo uses several images to reconstruct the object's surface. Here a single camera and the object are fixed, while the scene is illuminated from different known orientations taken in consecutive images. This method gives only relative height measurements, making it an excellent choice for 3D surface inspection.
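
For a calibrated and rectified stereo pair, the triangulation mentioned above reduces to the well-known relation depth = focal length x baseline / disparity. The sketch below applies it with invented camera parameters.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# Camera parameters are invented, not taken from any particular system.
focal_length_px = 1200.0    # focal length expressed in pixels
baseline_mm = 60.0          # separation between the two cameras

def depth_mm(disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline_mm / disparity_px

for d in (10, 20, 40):
    print(f"disparity {d:3d} px -> depth {depth_mm(d):7.1f} mm")
```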

Fringe projection

Light stripe projection requires static objects. Here, the whole surface of the sample is acquired at once by projecting a stripe pattern onto the surface, typically at an angle of 30˚, and recording the resulting image with a camera perpendicular to the surface. The large number of points acquired simultaneously gives height resolution up to two orders of magnitude better than with laser profiling. The measuring area can be scaled from less than a millimetre up to more than one metre, so it suits small as well as large samples.

Time of flight

Time of flight cameras measure the time taken for a light pulse to reach the object and return, for each image point. Since this time is directly proportional to the distance travelled, a distance value can be calculated for every pixel in the image.
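
The relationship is simply distance = speed of light x round-trip time / 2. A minimal worked example:

```python
# Distance from a time-of-flight measurement. The pulse travels out and back,
# so the object distance is half the total path.
SPEED_OF_LIGHT_MM_PER_S = 2.998e11      # ~3.0 x 10^11 mm/s

def distance_mm(round_trip_time_s):
    return SPEED_OF_LIGHT_MM_PER_S * round_trip_time_s / 2.0

print(f"{distance_mm(10e-9):.0f} mm")   # a 10 ns round trip is roughly 1.5 m
```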

UKIVA members can offer further advice on the different camera formats and technology.

3D Robot Vision

Automation is a key factor in improving productivity and competitiveness in world markets and the use of 3D vision to guide robots (pick and place) is key in maximising this competitiveness, particularly in the automotive and pharmaceutical industries, where 100% inspection is critical. Using 3D robot vision to pick unordered parts enables manufacturers to save a lot of time and resources shifting or organising parts in the manufacturing process or feeding robots and machines with parts.

The challenge lies in acquiring images in 3D, building a mathematical model to analyse the position of an object in 3D space, and then transmitting 3D picking coordinates to a robot, all in just a few seconds to meet the robot's cycle time and avoid it having to wait for the next set of coordinates. Fortunately, complex 3D images do not necessarily have to be created to achieve this. It is possible to do this using stereo vision imaging techniques, where features are extracted from 2D images that are calibrated in 3D.

As a rule of thumb, if there are a minimum of four recognisable features on an object, it is possible to create 3D measurements of the object and therefore generate the X, Y and Z coordinates of any part of the object, with a level of accuracy that allows the robot to grip it without causing any damage. If, however, there are not enough features, or no features at all that can be used for the 3D calibration, features can be ‘created’ using laser lines or dots to illuminate the area.
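
Where at least four recognisable features can be located in the image and their positions on the object are known, the part's pose can be estimated with a perspective-n-point solver. The sketch below uses OpenCV's solvePnP; the feature coordinates and camera calibration are made up for illustration.

```python
import numpy as np
import cv2

# Pose from four known features (perspective-n-point).
# All coordinates and calibration values are invented for illustration.
object_points = np.array([[0, 0, 0], [100, 0, 0], [100, 60, 0], [0, 60, 0]],
                         dtype=np.float64)     # feature positions on the part (mm)
image_points = np.array([[320, 240], [620, 250], [610, 430], [330, 420]],
                        dtype=np.float64)      # detected positions in the image (px)
camera_matrix = np.array([[1200.0, 0.0, 640.0],
                          [0.0, 1200.0, 480.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                      # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    print("Part position relative to the camera (mm):", tvec.ravel())
```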

A good example of this is the 3D de-palletising of sacks, which could contain anything from concrete to grain or tea. As the sacks are rather featureless, the whole pallet is illuminated with lasers and the laser lines are located in 2D images. The sacks are also recognised in the 2D images and all the information is combined to get 3D picking data - all well within the cycle time of the robot. So most of the work is done in 2D, with far fewer pixels to process, yet a high level of accuracy is maintained due to the lens and camera calibration that can achieve sub-pixel measurements.

UKIVA members can offer further advice on the different camera formats and technology.

Image Acquisition and Processing

Illumination

Light Emitting Diodes (LEDs) are a popular form of illumination for machine vision applications, offering a good deal of control. They can be readily pulsed, or strobed, to capture images of objects moving past the camera at high speed. Strobing needs to be synchronised with the objects to be inspected so that the camera is triggered at the same moment as the pulse of light. The short exposure times required for high-speed imaging mean that high light intensities are required. It is possible to dramatically increase the LED intensity over short exposure times by temporarily increasing the current beyond the rated maximum using lighting controllers. However, the LED must be allowed to cool between pulses to avoid heat damage. Lighting controllers can provide fine adjustment of the pulse timing, which is often more flexible than the camera's timing. The camera can then be set for a longer exposure time and the light pulsed on for a short time to ‘freeze’ the motion.
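
How far an LED can safely be overdriven depends entirely on the LED and controller in use, so the limits below are assumptions rather than device ratings, but the duty-cycle check itself is the calculation that matters: the pulses must be short and infrequent enough for the LED to cool between them.

```python
# Check that an overdriven strobe leaves enough cooling time between pulses.
# Pulse width, trigger rate and the allowed duty cycle are assumed figures.
pulse_width_s = 20e-6          # strobe pulse length
trigger_rate_hz = 500          # pulses per second
max_duty_at_overdrive = 0.02   # assumed safe duty cycle when overdriving

duty_cycle = pulse_width_s * trigger_rate_hz
print(f"Duty cycle: {duty_cycle:.3f}")
print("Within the assumed safe limit:", duty_cycle <= max_duty_at_overdrive)
```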

Triggering

High-speed imaging requires that the exposure of the camera happens exactly when the object is in the correct position. Initiating the start of an exposure at a particular time is called triggering. If a camera is free running, the position of the moving object could be anywhere in each captured frame, or the object could even be completely absent from some frames. Triggering delivers image acquisition at a precise time. The frequency of the trigger should not exceed the maximum frame rate of the camera, to avoid over-triggering. This also means that the exposure time cannot be longer than the trigger period (the inverse of the trigger frequency). The exposure is generally triggered by an external source such as a PLC, with a simple optical sensor often used to detect when the object is in the correct position. Precise triggering is very important for high-speed imaging, and in very high-speed applications great care must be taken to assess and reduce all of the factors that can delay the path from the initiating signal to the resultant action in the sensor, to ensure the required image is acquired. These factors can include opto-isolators in cameras as well as latency and jitter within the imaging hardware.
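
These two constraints are straightforward to check for a proposed set-up; the camera figures below are invented for illustration.

```python
# Sanity-check a proposed trigger rate and exposure against a camera's limits.
# All figures are invented for illustration.
max_frame_rate_hz = 200       # camera specification
trigger_rate_hz = 150         # proposed trigger frequency
exposure_s = 0.004            # proposed exposure time

print("Trigger rate within camera limit:", trigger_rate_hz <= max_frame_rate_hz)
print("Exposure fits the trigger period:", exposure_s <= 1.0 / trigger_rate_hz)
```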

Data capture & storage

High frame rates and high spatial resolution generate high volumes of data for processing. Image data is generally transferred directly to a PC's system memory or hard disk. This relies on an appropriate interface speed between the camera and the computer, and on the speed of the computer itself. There are several vision image data transfer standards, such as GigE Vision, Camera Link, Camera Link HS, USB3 Vision and CoaXPress, which generally offer a trade-off between data transfer rates and the allowable distance between the camera and the PC. If one of these interfaces offers an acceptable data transfer rate for the application and long sequences are required, this is a good solution. The alternative is to have the image recording memory within the camera itself, which increases data throughput significantly since images are held in the camera without any need for transmission while recording. However, the amount of onboard memory is significantly less than a PC hard drive, which means that only relatively short sequences can be recorded.
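
Choosing an interface starts with the required data rate, which is simply resolution x frame rate x bytes per pixel. The nominal bandwidths below are rough, configuration-dependent assumptions used only to show the comparison.

```python
# Compare the required data rate with rough single-link interface bandwidths.
# Bandwidth figures are approximate and depend on version and configuration.
width, height = 2048, 1536     # pixels
frame_rate = 100               # frames per second
bytes_per_pixel = 1            # 8-bit monochrome

required_mb_s = width * height * frame_rate * bytes_per_pixel / 1e6
print(f"Required: {required_mb_s:.0f} MB/s")

approx_bandwidth_mb_s = {
    "GigE Vision": 115,
    "USB3 Vision": 400,
    "Camera Link (Full)": 680,
    "CoaXPress (one CXP-6 link)": 600,
}
for name, bw in approx_bandwidth_mb_s.items():
    print(f"  {name:27s} {'OK' if bw >= required_mb_s else 'too slow'}")
```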

UKIVA members can offer further advice on the different camera formats and technology.

Line Scan Technology

The changing face of web inspection

Materials produced in continuous rolls (web) or sheets, such as paper, textiles, film, foil, plastics, metals, glass or coatings, are generally inspected using line scan technology to detect and identify defects, so that defective material is not sent to customers or passed on to added-value downstream processes. Like so many areas of machine vision camera technology, line scan imaging has seen some significant developments in recent years, which benefit not only web inspection but other line scan imaging applications as well.

Line scan basics

Line scan technology involves building up an image, one line at a time, using a linear sensor. For web inspection and many other machine vision applications, the object passes under the sensor, typically on a conveyor belt. Applications involving rotating cylindrical objects or where the camera moves relative to the object are also possible. Although linear sensors have similar pixel sizes to the sensors used in area scan cameras, the line lengths can be much greater. Instead of the 1-2K width typical in most megapixel area scan sensors, a line scan sensor can have up to 16K pixels. This means that for a given field of view, a line scan camera will provide a far higher resolution, and line scan technology makes it possible to capture images of wide objects at a single pass. High scan speeds for linear arrays mean that the amount of light falling on individual pixels is often lower than in area scan applications so consideration must be given to overcoming this.
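
The line rate a given application needs follows directly from the web speed and the pixel resolution on the object; the figures in the sketch below are illustrative.

```python
# Required line rate so that each scanned line corresponds to one pixel of
# travel on the web. Web speed and field of view are illustrative figures.
web_speed_mm_per_s = 2000      # 2 m/s web
field_of_view_mm = 800         # web width imaged by the camera
sensor_pixels = 16384          # 16K line scan sensor

pixel_size_on_web_mm = field_of_view_mm / sensor_pixels
required_line_rate_hz = web_speed_mm_per_s / pixel_size_on_web_mm
print(f"Cross-web resolution: {pixel_size_on_web_mm * 1000:.0f} um/pixel")
print(f"Required line rate:   {required_line_rate_hz / 1000:.1f} kHz")
```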

New developments in line scan technology

Both CCD and CMOS linear sensors have been used in line scan cameras for many years, but developments in CMOS technology driven by the mobile phone market have also led to significant benefits in industrial imaging sensors. Recent developments have included the introduction of 16K pixel sensors, simultaneous RGB and NIR imaging, higher line speeds, larger pixel variants for enhanced sensitivity, lower cost systems, enhanced software and the use of newer data transmission standards such as CameraLink HS and CoaXPress. Another interesting development has been the emergence of contact image sensors (previously found in photocopiers and scanners) as a viable alternative to line scan cameras for industrial applications. Also, some area scan cameras offer a line trigger mode for use in some line scan applications.

UKIVA members can offer further advice on the different camera formats and technology.

Camera Developments

Resolution and speed

CMOS sensor developments have allowed increases in pixel resolution to 16K and line speeds up to 140kHz. More pixels and higher line speeds generate more data: a 16K sensor operating at a 120kHz line rate produces around 2 GBytes/s of data, which necessitates camera/frame grabber combinations utilising the new generation of camera interfaces such as CameraLink HS and CoaXPress. Pixel size and number determine the length of the sensor. By making use of binning techniques and FPGA processing, resolution and effective pixel size can be adjusted in a single camera to optimise the trade-off between resolution and sensitivity. For wide web inspection applications, multiple line scan cameras may be needed to cover the entire width.

Improved sensitivity

Line scan cameras generally have short pixel exposure times and require more illumination than area scan cameras. Since higher line rates bring even shorter pixel exposure times, sensors frequently use dual line technology, with two rows of pixels scanning each line on the sample, improving the signal-to-noise ratio (S/N). Time Delay Integration sensors offer multiple integration stages, giving substantial S/N enhancement. Typically, line scan pixel sizes range from 3.5 to 14μm square, but a new range of single line scan cameras features 20μm square pixels, with a 2K CMOS sensor capable of operating up to 80kHz. The larger pixel size gives a better signal-to-noise ratio for a given exposure level, and higher line speeds than smaller-pixel systems at the same exposure level.

Colour and multispectral imaging

Three-sensor colour imaging in line scan cameras allows the collection of independent RGB information. Prism systems collect light from a single line and split it spectrally onto three sensors. Trilinear sensors collect the RGB components from three separate lines. These lines need to be physically separated to accommodate the necessary electronic structure.

A cost-effective alternative is a bi-linear detector with no line gap that uses colour filters similar to the Bayer arrangement used in area scan cameras. In another recent development, quadlinear and prism-based four-sensor cameras are now available to provide NIR output as well as RGB for multispectral imaging. This enhances imaging possibilities for a wide range of applications, including print, banknote inspection, electronics manufacturing, and food and material sorting.

Contact Image Sensors

Contact image sensors are an interesting alternative to line scan cameras for the inspection of flat products such as textiles, foils, glass, wood and other web-like materials for defects. Other applications include PCB, solder paste and packaging inspections as well as print inspection and high-end document scanning. They offer high data rates as well as high sensitivity and simple set-up. Contact image sensors use the same concept as used in fax machines and desktop scanners. They include a sensor and lens with pixels being mapped 1:1 to the object, with a working distance from a few mm up to around 12mm. This means the sensor has to be as big as the item being imaged, but has the advantage that distortion found in traditional lens/line scan camera combinations is removed. They are available with and without integral LED light sources.

The sensor head generally features a lens array using gradient index (GRIN) rod lenses, which focus light through a controlled variation of refractive index within each rod. Each lens captures an image of a very small region of the target, and thanks to the small overlap in the captured images, a clear, sharp, quasi-telecentric image is produced along the narrow line of the sensor head, with remarkable image uniformity. This is particularly important in applications such as high-value print inspection, for example on banknotes and passports, which may contain holograms. Holograms are particularly sensitive to the angle of light entering them, so the virtually telecentric structure of the contact image sensor is well suited to these applications.

Contact image sensors can be combined to offer extended lengths and provide similar features to line scan cameras in terms of dark current, peak response non-uniformity and dynamic range, but without the trade-offs concerning spatial resolution and light efficiency. Contact image sensor heads can use CMOS or CCD sensors as detectors. There is a choice of pixel layouts, from monochrome sensors to colour versions using alternating coloured pixels or tri-linear sensors. Resolutions up to 600 dpi are available, with scan speeds up to 1.8m/s for monochrome sensors. Image data output is generally provided via standard industrial CameraLink interfaces.

UKIVA members can offer further advice on the different camera formats and technology.

Illumination, Optics and Processing

Illumination

The shorter pixel exposure times of line scan cameras compared to area scan cameras generally mean that line scan applications require a greater level of illumination. Since line scan applications only require imaging of one line on the sample, line light illumination systems are usually used. High-intensity LED line lights provide long lifetimes and consistent, stable light output along the entire length of the light. Line lights are available for both front and back lighting, with bright field and dark field illumination being the most popular choices for front illumination, depending on the material being imaged. LEDs also offer a choice of wavelengths. The light unit can effectively be made to any length and intensity. However, the higher the intensity, the more expensive this option becomes because of the heat generated and the heat sinking needed to dissipate it. The use of enhanced-sensitivity sensors helps reduce the intensity of lighting required.

Exposure control

A line scan image is produced from the relative movement of the sample and the camera. Synchronisation of this movement with the camera is required to ensure that there is no distortion in the image. This is usually achieved by deriving a line trigger signal from an encoder on the sample transport (typically a conveyor belt), to ensure that the scanned lines are synchronous with the movement of the object. The camera collects light between these trigger signals, so if the movement speed varies, the image brightness will also vary. To ensure constant image brightness, exposure control is needed. This can either be set up on the camera itself or achieved by controlling the illumination intensity.
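
A hedged sketch of the encoder arithmetic is shown below: the encoder resolution in mm per count is compared with the desired along-web pixel size to work out how often to trigger a line. The encoder and conveyor figures are invented.

```python
# Derive a line-trigger interval from an encoder so that each scanned line
# corresponds to one pixel of travel. All figures are invented for illustration.
encoder_counts_per_rev = 4096
roller_circumference_mm = 200.0
pixel_size_on_object_mm = 0.05        # desired along-web resolution

mm_per_count = roller_circumference_mm / encoder_counts_per_rev
counts_per_line = pixel_size_on_object_mm / mm_per_count
print(f"Encoder resolution: {mm_per_count * 1000:.1f} um per count")
print(f"Trigger one line every {counts_per_line:.2f} encoder counts")
```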

Lenses

The sensor length is a function of the number and size of the pixels it contains: the more pixels there are and the larger they are, the longer the sensor will be, and this has a direct influence on the size of the camera lens. For sensors with a line length of more than 20mm, the use of traditional C-mount lenses becomes problematic, since the viewing angle at the ends of the sensor is significantly different and vignetting comes into play, with resulting intensity variations towards the edges of the image. The solution is to use F-mount lenses with a larger image circle diameter, but this adds to the cost of the optics. Alternatively, a sensor with smaller pixels, and hence a shorter line length, could be used, but this may require increased illumination. A uniform viewing angle can be obtained using telecentric lenses, but these again add size and cost to the installation. Careful thought must therefore be given to the imaging and resolution requirements to arrive at the optimum choice of sensor and lens.
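
The sensor length itself is just the pixel count multiplied by the pixel pitch, which makes the lens implications easy to see; the pixel counts and pitches below are illustrative, and the 20mm figure is the guideline quoted above.

```python
# Sensor length from pixel count and pixel pitch, compared with the ~20 mm
# guideline for C-mount lenses quoted above. Figures are illustrative.
def sensor_length_mm(num_pixels, pixel_pitch_um):
    return num_pixels * pixel_pitch_um / 1000.0

for pixels, pitch_um in [(2048, 7.0), (8192, 5.0), (16384, 3.5)]:
    length = sensor_length_mm(pixels, pitch_um)
    note = "C-mount may suffice" if length <= 20 else "needs a larger image circle (e.g. F-mount)"
    print(f"{pixels:5d} px at {pitch_um} um -> {length:5.1f} mm sensor: {note}")
```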

Image processing

The major image processing toolkits provide all of the tools necessary for inspecting continuous webs. These offer the facilities to find and classify defects such as cracks, tears, knots and holes, find colour variations or perform critical dimensional measurements at the high speeds needed. Other capabilities include code reading, robot guidance for cutting, trimming or shaping, and communication with third-party equipment such as PLCs, HMIs and remote storage. For other line scan applications, where effectively an area scan image is produced, many of the off-the-shelf software packages can be used, as well as specially developed software for print inspection.

UKIVA members can offer further advice on the different camera formats and technology.

Technology

There are many ways in which vision technology can be used in end-of-line applications.

Off-the-shelf vision systems

These are frequently ‘smart’ cameras which can be set up at the end of a production line by the customer’s production engineers. These are particularly appropriate for single inspection applications. Smart cameras combine image capture, processing and measurement in a single housing and output the results from the analysis over industry standard connections. They can be used for high-volume component inspection, 1D and 2D code reading and verification, optical character recognition etc. For solely code reading applications, dedicated high-speed code readers also featuring integrated lighting, camera, processing, software and communications are available.

More complex systems

Where multiple inspections are required (for example, where the same object may need to be viewed from different directions), the use of multiple smart cameras may not be the most cost-effective approach. Using multiple cameras controlled by a single PC may offer a better solution, and these types of systems can generally be set up and installed with the help of the manufacturers or vision component distributors.

Major integration projects

Challenging end-of-line inspection applications (or indeed any in-line inspection), where the installation set-up is complex, or a complete turnkey solution including product reconciliation, rejection and handling is required, are generally handled by specialist vision system integrators. Systems integrators will also provide the detailed documentation needed to support the validation and auditing of equipment (essential in the healthcare and pharmaceutical industries), manuals, commissioning, training and post-installation support.

Stand-alone end-of-line systems

Stand-alone end-of-line (EOL) systems may be added to the manufacturing environment to provide in-line inspection when it is simply not possible to integrate a vision system into an existing line. Featuring an integral transport and reject mechanism, they will be equipped with the appropriate illumination, camera, control software and reject and fail-safe mechanisms for the particular application.

Compact Vision Systems

Compact vision systems have the processor in a small compact housing with industrial I/O rather than in the camera itself. This enables multiple cameras to be connected to the controller over long cable lengths to share the processor and I/O, making them very cost-effective for multi-camera solutions.

Industrial Vision Systems

Industrial vision systems can introduce automation into the production process at several different levels, from simply speeding up the inspection process to being an integral part of a statistical process control system that can identify when a manufacturing process is moving out of specification so that remedial action can be taken before any defective product is manufactured.

Online inspection systems acquire images of products and inspect them in real-time before providing a decision about product quality. This can be useful in identifying problems and enabling process improvements to stop substandard items from getting through to the next stage of production.

There are three main types of vision system: smart cameras, compact vision systems and PC-based systems.

Other areas to consider when specifying vision systems are machine vision illumination products. Specifying machine vision lighting is a critical step when designing an industrial vision system as getting the illumination geometry and wavelength correct makes the vision system more reliable and simplifies the inspection task considerably.

Also, selecting the right lens for a machine vision camera can have a significant effect on the image quality and therefore success of the machine vision application. With a wide range of lens mounts, sensor resolutions and sensor sizes it’s important to choose an industrial lens with the correct specification for the application. The wide variety of available photoelectric sensor types with their varying function characteristics makes it possible to solve nearly every detection problem.

PC-based Vision Systems

PC-based systems harness the ever-increasing power of mass-market computing for high-performance vision systems. These systems can support the most complex image processing capabilities, with a versatility that ranges from single-PC, single-camera configurations to multi-computer, multi-camera configurations.

Cable and connector products for machine vision are another important area to consider as in most imaging or machine vision applications, a robust interconnect between the industrial camera and the PC or image processor is required. There are not only many camera interfacing standards available but also environmental requirements for machine vision cable such as the need for robotic flex, fire safety and resistance to chemical exposure. Standard machine vision cables as well as custom and specialist cables are all available.

PC-based machine vision systems all require an interface between the camera and computer. In modern systems, this is based on several machine vision camera interface standards. Some interfacing standards use the consumer ports that reside inside a PC such as USB or FireWire while others require an additional camera interface card often called a frame grabber.

Board-level vision systems

The availability of small, embedded processing boards based on either ARM or x86 instruction set architecture offers great potential for the development of embedded vision systems for industrial applications. Many of the leading image-processing libraries and toolkits can now be ported to these platforms, meaning that the tools are available to produce a wider range of vision solutions in this format.

Developing applications for embedded systems (courtesy of Multipix Imaging).

Combining these processing capabilities with low-cost cameras, including board-level cameras, means that vision systems could be incorporated into a wide variety of products and processes with comparatively small cost overheads.

The board-level challenge

While embedded vision has been applied to transport, logistics and other non-industrial machine vision applications, their use in industrial applications is still at a comparatively early stage. Systems are not readily available ‘off the shelf’ in the same way as smart cameras or multi-point vision systems. To date, embedded vision systems tend to have been designed by vision and integration specialists for a specific OEM application, often in the field of medical devices, industrial automation or remote monitoring. Thus, at present their specialist development costs will offset the inherently low component costs. Although the boards offer very powerful processing capabilities, data transfer bandwidth is limited even with a direct connection to the board. This means that the boards need to be used to perform selected processing tasks that minimise data transfer. In addition, although image processing libraries are available for use on the embedded processors the exact performance of an algorithm can vary depending on the particular processor used.

Moving forward

In the same way that other variants of vision systems have matured in recent years, there is little doubt that there will be significant developments in board-level systems allowing the real cost benefits to be realised. It is also likely that there will be an increasing use of SoC (system on chip) processors such as the Xilinx Zynq series.

UKIVA members can offer further advice on Embedded Vision Technology.

Smart cameras and vision sensors

Smart cameras

With their on-board image capture and processing capabilities, smart cameras avoid the need to transmit large quantities of image data back to a remote PC for processing and analysis. The inspection decisions are made on board and the results sent to a PLC over industry standard connections such as Ethernet. The ability to pack more speed and processing power into smaller chip sizes has enabled more intelligence to be embedded into smart cameras. In addition, use is being made of multiple processing technologies such as DSP, CPU and FPGA for algorithm, communication and control optimisation. Smart cameras have benefited from the recent developments in CMOS sensor technology, with the result that there is an enormous choice on the market covering an impressive range of resolutions, sizes and weights. Smart cameras are available with different levels of embedded software, ranging from simple code reading to the most sophisticated imaging toolkits. Camera configuration is carried out via a simple user interface, often a web browser or a user development interface. Smart cameras offer a comprehensive range of capabilities, including:

  • Positioning – guide robot handlers or adjust vision tools for part measurement
  • Identification – for verification or traceability
  • Verification – verifying correct assembly or packaging
  • Measurements – for dimensional accuracy
  • Flaw detection – checking surfaces for defects

Smart cameras are single-point inspection systems, so where there are multiple inspection points in a process, it may be more effective to consider using a compact vision system.

3D smart cameras

Perhaps the most striking evidence of development in smart camera technology has been the emergence of 3D smart cameras. Up until comparatively recently, the computationally intensive requirements of 3D measurements to acquire images, create 3D point clouds and make measurements were only possible using a PC. However, the developments in processor technology mean that this is also now possible using processors housed in the camera itself, and these 3D smart cameras can be used to make the appropriate measurements in production line environments in the same way as 2D smart cameras.

Smart vision sensors

Smart vision sensors are low-cost imagers, often with integrated light sources, which can perform simple tasks such as identifying the orientation, shape and position of objects and features. They can also inspect for assembly errors, defects, damaged parts and missing features. Embedded vision tools can provide part locating, feature finding, counting and measuring capabilities. The built-in intelligence can allow these tools to be combined and used numerous times to solve simple or complex inspection tasks.

UKIVA members can offer further advice on Embedded Vision Technology.

Choosing an embedded vision solution

Determining what type of embedded vision system is right for a given application depends on the application itself. What needs to be accomplished and how will the resultant data be used? Other factors include the number of sensors needed, the operating environment including the amount of space available, the level of support available and, of course, the cost. One of the most important considerations is software. The capabilities of the software must match the application, programming and runtime needs.

Board-level applications

Board-level embedded vision systems deployed to date have generally been developed by vision specialists. Once this initial development has been completed, the economies of scale offered by the low-cost components can be realised. There is much potential for board-level systems, ranging from use in hand-held devices to being an integral part of the smart factory approach. It will be interesting to see whether system development will remain in the domain of the specialist or whether more easily set-up systems (as with smart cameras) will become commonplace.

Smart cameras and vision sensors

Smart cameras can be used in all of the traditional industrial vision applications, such as high-volume component inspection, robot guidance, 1D and 2D (DataMatrix) code reading and verification, optical character recognition etc. Small form factors and high-end embedded software offer great flexibility to the machine builder or systems integrator who wants to use vision as an integral part of a process or machine. Smart cameras are single-point inspection systems and are the ideal choice where multiple independent points of inspection are needed: each one can be set up independently to perform a specific task and modified if needed without affecting the other inspections. For less demanding single-point inspections, the low cost of ownership of smart vision sensors allows them to be used at multiple points of inspection. This gives better failure analysis data and allows corrective action to be taken more quickly and easily. 3D smart cameras provide the ability to process whole parts, making factory automation easier and less expensive by eliminating the multiple components and software engineering required for automated part scanning and detection. With discrete parts segmented into 3D point cloud datasets, it is possible to perform volumetric measurements such as volume, centroid and orientation, providing information on the part's dimensions, location and orientation.
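
As a minimal sketch of that last step, the snippet below takes a segmented part represented as a point cloud and computes its centroid and axis-aligned extent; random points stand in for real scanner data purely for illustration.

```python
import numpy as np

# A segmented part represented as an (N, 3) point cloud in mm.
# Random points stand in for real scanner data, for illustration only.
rng = np.random.default_rng(0)
points = rng.uniform(low=[0, 0, 0], high=[80, 50, 30], size=(5000, 3))

centroid = points.mean(axis=0)                     # location of the part
extent = points.max(axis=0) - points.min(axis=0)   # axis-aligned size
print("Centroid (mm):", np.round(centroid, 1))
print("Extent   (mm):", np.round(extent, 1))
```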

Multi-point vision systems

Multi-point inspection systems are best suited to applications where multiple cameras are required to carry out the same inspection. Comprehensive embedded software provides processing and measurement capabilities equivalent to smart camera systems. In addition, cameras with different sensor sizes and resolutions can be mixed and matched according to the particular inspection point.

UKIVA members can offer further advice on Embedded Vision Technology.

Multi-point camera systems

Centralised image processing

Multi-point vision systems (also known as compact vision systems) provide the flexibility, integration and ruggedness required for many machine vision applications. They consist of a compact unit that contains high-speed processors and high-speed memory resources in a rugged enclosure. Multiple cameras can be attached, while the unit itself can be readily integrated into factory environments alongside other automation controllers.

The multiple camera inputs allow the inspection of different views of the same part, or even different parts, simultaneously. This integrated functionality provides a cost-effective and less complex solution than comparable smart camera implementations. Multi-point vision systems are available with Gigabit Ethernet, CameraLink and USB3 data interfaces. This allows a choice based on the required image data transfer rate from the camera to the central unit for processing, the maximum possible distance between the cameras and the control unit without repeaters as well as cost.

Gigabit Ethernet ports on the controller may be internally connected through independent data lanes to alleviate the bandwidth bottlenecks often associated with multi-camera acquisition. CameraLink systems include appropriate built-in frame grabbers. Different levels of embedded vision software can be supplied with these systems, providing the flexibility to tackle a host of machine vision applications. The number of cameras that can be used simultaneously will depend on the specification of the system, but for GigE Vision versions, larger camera network configurations can be accommodated using external switches.

Complete integration

Multi-point vision systems can be equipped with several interfaces for system integration. These can include dedicated display and USB ports for setup and runtime control, Gigabit Ethernet and serial ports for factory communication, dedicated trigger inputs for inspection timing, dedicated strobe outputs for lighting control and optoisolated I/O for associated equipment interfacing.

Choice of cameras

With a choice of data interfaces there is a huge range of cameras available for use in multi-point vision systems, including line scan cameras. Cameras with different resolutions and sensor sizes can be mixed and matched as required for the specific application, and of course, other factors such as size, weight and cost will also be taken into consideration.

UKIVA members can offer further advice on Embedded Vision Technology.