Thursday, August 11, 2011

Software compensates for lighting nonuniformity in solar cell inspection

August 4, 2011
Measuring the area of a tack used to mechanically attach parts of the solar cell together indicates the strength of the attachment
Solar cell inspection systems often require numerous features to be measured simultaneously with a single camera. At Owens Design (Fremont, CA, USA), a solar cell inspection system was recently developed to check part distances, orientation, features, and blemishes. During system design, a smart camera was used to perform seven separate visual inspections on six different parts.
During the development phase, the camera, software, and lighting were set up to inspect product samples. To enhance and inspect the edges of the product, off-axis illumination was used and images captured using an In-Sight smart camera from Cognex (Natick, MA, USA).
Unfortunately, close to deployment, a change in product materials meant that although the camera could perform six of the inspection tasks, surface features could not be easily measured because a lack of uniform illumination compromised the contrast of the captured images. Essentially, the camera lacked a means of flattening the visual field so the illumination appeared uniform across the entire area being inspected.
Other cameras were considered for the application, but they could not be used for a number of different reasons. First—and most important—the client had specified the Cognex In-Sight camera because of its low integration cost, factory support, and a desire to use a standard vision supplier across multiple applications.
Second, other smart cameras were either not powerful enough, were difficult to deploy, or lacked sufficient technical support. Last, while it would have been possible to adjust the lighting and perform a second inspection of the part, this would have doubled the number of inspection passes and required recalibrating the camera. Since these alternatives were unacceptable, it was decided to compensate for the lighting nonuniformity by using software.
Due to the lighting nonuniformity, the camera was unable to successfully measure the area of a tack used to mechanically attach parts of the solar cell together (see figure above). This task is important because measuring the area of the tack indicates the strength of the attachment. Since the tack has a different grayscale value than the rest of the part being viewed, a smart camera should be able to find and measure its area. Unfortunately, illumination variation across the tack area made it impossible to set a single correct grayscale threshold value. Without such a value, the area of the tack could not be automatically segmented from the rest of the image.
In the raw image captured by the camera, the illumination varies from brighter at the bottom to darker at the top. While the tack region is discernible, the grayscale difference between it and its background is very small.
To solve the problem of lighting nonuniformity, a simple yet elegant software solution was developed (see figure at bottom). First, a featureless portion of the image containing the lighting gradient was extracted from the original image (top, left). This sample (shown in red) was then scaled in the x direction to be as wide as the part (top, middle). The original image was then rotated 180° so the illumination was darker at the bottom and lighter at the top (middle images).
Finally, the expanded sample region (top right) was added to the reversed image (middle, right). Combining the reversed image where the lighting gradient runs darker to lighter from top to bottom with the expanded sample image then corrects the illumination gradient for the tack area (bottom, right image). Additional coding was used to apply a grayscale offset to restore the 8-bit image data.
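The same flattening idea can be sketched in a few lines of image arithmetic. The following is a rough NumPy illustration of the steps described above, not the actual In-Sight implementation; the function name, the strip_cols parameter, and the offset handling are assumptions made for clarity.

    import numpy as np

    def flatten_field(image, strip_cols):
        """Rough sketch of the gradient compensation described above.

        image      -- 8-bit grayscale frame, brighter at the bottom, darker at the top
        strip_cols -- (start, stop) columns of a featureless region that carries
                      only the lighting gradient (hypothetical parameter)
        """
        img = image.astype(np.int32)

        # 1. Extract a featureless strip containing the gradient and expand it
        #    in x so it is as wide as the part.
        strip = img[:, strip_cols[0]:strip_cols[1]].mean(axis=1, keepdims=True)
        gradient = np.tile(strip, (1, img.shape[1]))

        # 2. Rotate the original image 180 degrees so its gradient runs darker
        #    at the bottom and lighter at the top.
        rotated = np.rot90(img, 2)

        # 3. Add the expanded sample to the rotated image; the opposing
        #    gradients cancel and the field flattens.
        combined = rotated + gradient

        # 4. Apply a grayscale offset so the result fits back into 8 bits.
        combined -= combined.min()
        return np.clip(combined, 0, 255).astype(np.uint8)

    # With a flat field, a single global threshold can segment the tack, and its
    # area is simply the pixel count of the thresholded region (illustrative only):
    # tack_area = int((flatten_field(frame, (0, 20)) > THRESHOLD).sum())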
Once this compensation was applied, detection of the tack became very robust. In addition, since the correction can be performed dynamically, it can compensate for time-varying lighting or for different parts or materials. This lighting-nonuniformity compensation resolved the solar cell inspection issue, making it possible to meet the client's deployment schedule while providing an easy, low-cost solution that can be reused in similar inspection applications.
-- By Hans Hansen, senior systems design engineer, Owens Design (Fremont, CA, USA)
Lighting nonuniformity across the image is compensated using a process of averaging the grayscale value of a region within the original image with an image rotated by 180 degrees

Camera Vendors Leverage CMOS Imagers

August 1, 2011
Vendors employ off-the-shelf and custom CMOS area-array imagers to pioneer new designs for wide-ranging imaging applications
Andrew Wilson, Editor
During the past decade, many articles have been written detailing the performance characteristics and benefits of CCD and CMOS imagers. Initially, the promise that CMOS imagers would be less expensive to fabricate, would enable a higher level of system integration, and would offer quantum efficiency, noise levels, and dynamic range similar to their CCD counterparts proved somewhat exaggerated. In recent years, however, CMOS sensors have shown marked image-quality improvements, and manufacturers are leveraging their higher speed, lower power requirements, and greater integration potential in a range of novel cameras and camera systems.
Intent on developing devices for large consumer markets such as cellular telephones and mobile computers, many semiconductor companies entered the CMOS imaging market either by designing their own devices or by acquiring specialized CMOS design houses. Today, the few companies still pursuing the mobile market are developing multimegapixel, back-illuminated devices with pixel sizes of 1.1 μm or less, despite the fact that no commercially available consumer lens can possibly resolve such detail (see R. Fontaine, "A Review of the 1.4 μm Pixel Generation," http://www.chipworks.com/Related_Product_Info/Chipworks_review_1.4um_pixels).
To address more specialized machine-vision and image-processing markets, other vendors have leveraged the benefits offered by CMOS fabrication techniques to meet the demands of low-power, high-resolution, intelligent or low-light-level/high-dynamic-range applications.

Embedded imaging

In medical imaging systems, the low power and compact size of CMOS imagers allow developers of products such as endoscopes to replace older, analog-based systems with digital camera systems. Many CMOS imager vendors now offer products that allow digital endoscopes to be produced cost-effectively. Companies such as Awaiba, AltaSens, and OmniVision have all developed imagers specifically targeted at such low-cost, low-power applications.
One of the first companies to announce such a product, Awaiba, offers the NanEye—a 140 × 140-pixel CMOS imager measuring 540 × 500 μm that can produce 8-bit/pixel images at up to 40 frames/s. OmniVision also offers a CMOS imager, the OV6930, specifically targeted at the medical image-processing market. With a packaged footprint of 1.8 × 1.8 mm, OmniVision's OV6930 is a 1/10-in. array that operates at up to 30 frames/s at 400 × 400 HVGA or 60 frames/s at 400 × 200 pixels. The imager is already being used in two medical device modules from COMedia Ltd.
In a larger 1/3-in. format, the ProCamHD 2462 imaging system-on-chip (iSoC) sensor from AltaSens features a 1280 × 720-pixel, 60-frames/s imager with an integrated 12-bit ADC, allowing high-definition digital images to be output directly from the sensor. According to AltaSens, the device has been used by MGB Endoscopy in the design of its MD-V endoscopy camera. By employing sophisticated, miniaturized CMOS imagers, medical system developers can reduce the parts count—and thus size—associated with this type of camera system, simultaneously reducing the cost of their products.

High-speed applications

Embedded medical applications are just one of the areas now taking advantage of CMOS imager characteristics. Many industrial machine-vision and motion-control applications require very high-speed imagers with global shutters and high pixel resolution. Again, CMOS imager vendors such as Alexima, CMOSIS, and ON Semiconductor are responding to this demand with an array of products that have been integrated into high-speed cameras.
Currently, Alexima offers two CMOS imagers targeted at high-speed image capture. The AM41 is a 4-Mpixel, 500-frames/s global shutter CMOS sensor that, at a reduced resolution of 1920 × 1080 pixels, can achieve rates as high as 1000 frames/s.
Alexima's most recent image sensor, the Am1X5, is targeted at even higher frame rates. Featuring a 1k × 1k-pixel, 5000-frames/s digital CMOS sensor and a global shutter, the device has recently been incorporated into the Y4, a high-speed camera from IDT that is capable of running at 3000 frames/s with 1016 × 1016-pixel resolution.
To keep up with the bandwidth available from these sensors, many camera manufacturers have incorporated the latest high-speed interfaces into their products. Mikrotron has incorporated the Camera Link HS standard into its AM41-based EoSens 4CL HS camera to allow transfer of images to a host computer at 2.1 Gbytes/s.
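As a rough sanity check (assuming approximately one byte per pixel at the full 4-Mpixel resolution), such a camera must sustain about 4 Mpixels/frame × 500 frames/s × 1 byte/pixel ≈ 2 Gbytes/s, which squares with the quoted 2.1-Gbyte/s figure and exceeds the roughly 680 Mbytes/s that a conventional Full Camera Link configuration can carry.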
Like Mikrotron, Optronis has incorporated the AM41 into its high-speed camera, the CL4000CXP (see Fig. 1). Rather than use the Camera Link HS interface, Optronis transfers data from the camera to the PC over four CoaXPress data channels; at this data transfer speed, the 4-Mpixel CMOS camera can be clocked at 500 frames/s. VDS Vosskühler (now part of Allied Vision Technologies) takes yet another approach in its AM41-based CMC-4000 camera to achieve 400 frames/s at the full 4-Mpixel resolution: two readout channels separately transmit the left and right data regions of the imager over two 10-bit Camera Link outputs. As a result, either a frame grabber with two 10-tap Camera Link inputs or two separate frame grabbers, each with a single 10-tap input, is required.
FIGURE 1. Optronis has incorporated the AM41 4-Mpixel CMOS imager from Alexima into its high-speed camera, the CL4000CXP. Data from the camera are transferred to the PC using four CoaXPress data channels.
CMOSIS has found a number of customers for its CMV2000 and CMV4000 image sensors. Both devices feature pipelined global shutter, correlated double sampling, and 16 channels of LVDS output; the CMV2000 features an image format of 2048 × 1088 pixels and can be clocked to produce 340 frames/s in 10-bit mode, and the 2048 × 2048-pixel CMV4000 can run at up to 180 frames/s.
To date, a number of Camera Link offerings incorporate these CMOSIS devices: They include Adimec's CMV4000-based Quartz Qs-4A40 and CMV2000-based Qs-2A80; Basler's CMV2000-based acA2000-340km/kc and CMV4000-based acA2040-180km/kc; and Point Grey's Gazelle camera series.
After acquiring the CMOS image sensor line from Cypress Semiconductor, ON Semiconductor has also found success for its line of high-resolution image sensors among camera vendors. One of the most impressive of these, the VITA 25K, offers 5120 × 5120-pixel resolution, pipelined global shutter, and 53 frames/s at full resolution. Higher frame rates can be achieved using windowed or sub-sampled readout modes.
At the 2010 Technical Exhibition on Image Technology and Equipment held in Japan, Edec Linsey System announced it had developed a camera based on the VITA 25K image sensor. Dubbed the VIS-1001-PM, the camera uses a Full Camera Link interface to deliver 20 frames/s at full 5120 × 5120-pixel resolution.
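A back-of-the-envelope calculation (again assuming about one byte per pixel) suggests that the interface, not the sensor, sets this limit: 5120 × 5120 pixels × 53 frames/s ≈ 1.4 Gbytes/s, well beyond the roughly 680-Mbyte/s ceiling of a Full Camera Link configuration, whereas 20 frames/s corresponds to about 520 Mbytes/s and fits comfortably.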

Dynamic range

While many camera vendors are using the latest CMOS imagers in embedded and high-speed applications, others are leveraging the high dynamic ranges that can be achieved using the devices.
Photonfocus has used its own A1312 imager in the MV1-D1312-240-CL-8, a Camera Link camera whose 1248 × 1082-pixel array can operate at 200 frames/s at full resolution while achieving a dynamic range of up to 120 dB.
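For context, dynamic range in decibels is commonly expressed as 20·log10 of the ratio between the largest and smallest resolvable signals, so a 120-dB figure corresponds to an intensity ratio of roughly 1,000,000:1, well beyond the roughly 60 to 70 dB typical of conventional linear-response image sensors.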
Other camera vendors such as Red Shirt Imaging and Imaging Development Systems (IDS) are employing devices from providers such as Imager Labs and New Imaging Technologies (NIT).
Both Red Shirt Imaging and IDS have produced cameras capable of resolving changes in light intensity from both very bright and relatively dim regions within the same field. Red Shirt Imaging has used a custom CMOS imager from Imager Labs to develop the NeuroCMOS-SM, a series of cameras for high- and medium-light-level measurements that can operate at speeds as fast as 10,000 frames/s with a 21-bit dynamic range.
IDS has opted to integrate a wide-dynamic-range (WDR) image sensor from NIT into both its 768 × 576-pixel UI-5120SE GigE and UI-1120SE USB 2.0 cameras (see Fig. 2). Using the WDR imager, both cameras achieve a dynamic range of 120 dB.
FIGURE 2. Imaging Development Systems (IDS) has integrated a wide-dynamic-range (WDR) image sensor from New Imaging Technologies into both its 768 × 576-pixel UI-5120SE GigE and UI-1120SE USB 2.0 cameras. Using the WDR imager, both cameras achieve a dynamic range of 120 dB.
Though there is still strong demand for cameras based on CCD imagers, the speed, cost, and integration advantages of CMOS imagers allow vendors to differentiate their cameras while offering added functionality. Embedded and high-dynamic-range applications clearly stand to benefit from advances in CMOS imager designs; however, camera interfaces such as USB 3.0, CoaXPress, 10GigE, and Camera Link HS, fast as they are, may still prove limited in their ability to support future generations of high-speed CMOS-based cameras.

Company Info

Adimec
Eindhoven, the Netherlands
http://www.adimec.com/
Alexima
Pasadena, CA, USA
http://www.alexima.com/
AltaSens
Westlake Village, CA, USA
http://www.altasens.com/
Awaiba
Madeira, Portugal
http://www.awaiba.com/
Basler
Ahrensburg, Germany
http://www.baslerweb.com/
CMOSIS
Antwerp, Belgium
http://www.cmosis.com/
COMedia
Hong Kong, China
http://www.comedia.com.hk/
Edec Linsey System
Toyohashi, Japan
http://www.edeclinsey.jp/
IDT
Tallahassee, FL, USA
http://www.idtpiv.com/
Imager Labs
Monrovia, CA, USA
http://www.imagerlabs.com/
Imaging Development Systems
Obersulm, Germany
http://www.ids-imaging.com/
MGB Endoscopy
Berlin, Germany
http://www.mgb-berlin.de/
Mikrotron
Unterschleissheim, Germany
http://www.mikrotron.de/
New Imaging Technologies
Verrières le Buisson, France
http://www.new-imaging-technologies.com/
OmniVision
Santa Clara, CA, USA
http://www.ovt.com/
ON Semiconductor
Phoenix, AZ, USA
http://www.onsemi.com/
Optronis
Kehl, Germany
http://www.optronis.com/
Photonfocus
Lachen, Switzerland
http://www.photonfocus.com/
Point Grey Research
Richmond, BC, Canada
http://www.ptgrey.com/
Red Shirt Imaging
Decatur, GA, USA
http://www.redshirtimaging.com/
VDS Vosskühler, part of Allied Vision Technologies
Osnabrück, Germany
http://www.vdsvossk.de/


Wednesday, August 10, 2011

http://www.yjet.co.kr/

http://blog.daum.net/m9003s

Opening of the Advanced Workforce Training Center for Silicon-Based Solar Cell Materials and Devices

Chonbuk National University, recently selected for the Ministry of Knowledge Economy's "Basic and Advanced Workforce Training Program for the Energy Industry," opened its "Advanced Workforce Training Center for Silicon-Based Solar Cell Materials and Devices" (director: Professor Yang Oh-bong) to carry out the program at 2 p.m. on the 27th in Building 8 of the College of Engineering.

The opening ceremony, which included the unveiling of the center's signboard, was attended by university president Seo Geo-seok, College of Engineering dean Kim Dong-won, Korea Institute of Energy Technology Evaluation and Planning division head Sung Chang-kyung, Jeollabuk-do strategic industry bureau director Lee Geum-hwan, Korea Photovoltaic Industry Association vice chairman Lee Sung-ho, and representatives of companies including OCI, Bibong ENG, and Alti Solar.

From 2011 to 2015, the center will receive a total of 3.5 billion won, including 2.3 billion won in national funding, to run a five-year graduate program that combines theoretical and practical education on silicon-based solar cell materials and devices; master's and doctoral candidates must publish SCI papers or patents based on R&D in the field in order to receive their degrees.
In particular, education and research will be conducted jointly with Germany's Fraunhofer ISE, widely regarded as the world's leading solar research institute, as well as with the University of Frankfurt in Germany and Osaka University in Japan, raising expectations for world-class workforce training.

In addition, the roughly 100 master's- and doctoral-level graduates the center expects to produce over five years will be able to find employment at Korean photovoltaic companies such as OCI, Alti Solar, Bibong ENG, and Dasstech, and the center is expected to serve as a driving force in lifting Korea's photovoltaic industry into the world's top five.

President Seo Geo-seok said, "In line with government policy to foster the renewable energy sector, our university is receiving more than 7 billion won per year from the government, over 70 billion won in total, from 2005 through 2014 for training advanced personnel in renewable energy, and is investing it intensively. The center opening today, in particular, will become a mecca for producing the specialists who will lead Korea's solar cell field through its advanced convergence track in solar cell technology."

Following the opening ceremony, a symposium assessing the present and future of the photovoltaic industry was held from 2:30 p.m., producing wide-ranging discussion on how to develop the industry.

Ferrite

Also known as a ferrite salt.

A magnetic, ceramic-like material used in many types of electronic devices.





Ferrites are hard, brittle, iron-containing materials, generally gray or black in color and polycrystalline, that is, made up of a large number of small crystals. Chemically, they consist of iron oxide combined with one or more other metals. Ferrites are formed by the reaction of iron(III) oxide (rust) with any of a number of other metals, including magnesium, aluminum, barium, manganese, copper, nickel, cobalt, or iron itself.


The chemical formula of a ferrite is usually written M(FexOy), where M stands for whichever of the metals mentioned above is combined with the iron oxide. For example, nickel ferrite is NiFe2O4 and manganese ferrite is MnFe2O4; both are spinel minerals. The garnet mineral known as YIG (yttrium iron garnet, Y3Fe5O12), which contains the rare-earth element yttrium, is used in microwave circuits. The best-known ferrite, known since biblical times, is magnetite (lodestone, or iron(II) ferrite), with the formula Fe(Fe2O4).

Ferrites exhibit a form of magnetism called ferrimagnetism, which is distinct from the ferromagnetism of materials such as iron, cobalt, and nickel. In a ferrite, the magnetic moments of the constituent atoms align along two or three different directions. The resulting partial cancellation leaves the ferrite with an overall magnetic field weaker than that of a ferromagnetic material. This asymmetry in atomic orientation arises from the presence of two or more different kinds of magnetic ions, or from the particular crystal structure. The term ferrimagnetism was coined by the French physicist Louis Néel, who was the first to study ferrites systematically at the atomic level. There are several kinds of ferrimagnetism: in collinear ferrimagnetism the fields are aligned in opposite directions, while in triangular ferrimagnetism the fields lie at various angles to one another. Ferrites occur in a variety of crystal structures, including spinel, garnet, perovskite, and hexagonal structures.

The most important properties of ferrites are their high magnetic permeability and high electrical resistance. High permeability makes ferrites useful in devices such as antennas, while high electrical resistance allows them to serve as transformer cores that reduce eddy currents. A type of ferrite known as square-loop ferrite can be magnetized in either of two directions by an electric current. This property makes it useful in digital computer memory, since a tiny ferrite ring can store one binary bit of information. Another form of computer memory can be made from certain single-crystal ferrites in which very small magnetic domains, known as bubbles, can be individually manipulated (magnetic bubble memory). Many ferrites absorb microwave energy in only one direction, and so are used in microwave waveguides.