With these high-level user needs in mind, we could begin to figure out what hardware and software was needed to make it happen.
We pulled together what we had available in our lab and settled on some of the following hardware:
We chose this camera for a few reasons - the main one being that we had one available in the lab. But availability aside, there is a lot to like about this camera.
ZWO sells camera systems for astrophotography. They provide extensive technical specifications on the camera’s performance, making it much easier to ensure we are getting what we need. This particular model uses a Sony IMX462 - a back-thinned CMOS sensor that drastically improves light sensitivity and was designed with NIR sensitivity in mind. Ideally, we would prefer a monochrome sensor to a color sensor to maximize light detection - but we had this one available for use and knew we could make it work.
The sensor itself is packaged with ZWO’s custom DSP and firmware - which are specifically designed around low-light imaging applications. This is critical for fluorescence imaging, where we are typically starved for photons.
We were able to fit an 800nm long-pass filter behind the camera lens - which is critical for fluorescence imaging, since it blocks the excitation light and passes only the longer-wavelength fluorescence emission. Without it, we would struggle to get any usable contrast in our NIRF images.
When picking a lens for fluorescence imaging, you typically want to minimize the f/# and field of view for the camera sensor size being used. Minimizing the f/# of your imaging lens will maximize light collection, but will yield a shallow depth of field. Your device will need to balance light collection with depth of field. Variable aperture lenses let you test this out to strike the right balance for your product design.
Additionally, smaller fields of view make it easier to optimize illumination uniformity - a critical determinant of fluorescence imaging performance. Typically, longer focal length lenses for a given f/# will narrow your field of view. A large field of view makes illumination uniformity challenging to maintain, while a narrow field of view will limit the size of features you can completely image.
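As a rough sketch of these lens trade-offs, here is the thin-lens arithmetic behind them. The sensor width and focal lengths below are illustrative placeholders, not our actual part numbers:

```python
import math

def relative_throughput(f_number, reference_f_number=2.8):
    """Collected light scales as 1/(f/#)^2 - halving the f/# gives ~4x the light."""
    return (reference_f_number / f_number) ** 2

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Thin-lens angular field of view: 2 * atan(sensor_width / (2 * focal_length))."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Illustrative numbers only: a small-format machine vision sensor ~5.6mm wide.
for f in (8, 16, 25):
    print(f"{f}mm lens: {horizontal_fov_deg(f, 5.6):.1f} deg field of view")
print(f"f/1.4 collects {relative_throughput(1.4):.0f}x the light of f/2.8")
```

Running this shows the tension in the text above: dropping the f/# buys light quadratically, while a longer focal length shrinks the field of view you have to illuminate uniformly.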
In a perfect world, we might source a telecentric lens with a low f/# and 100mm field of view for maximal fluorescence imaging performance. But our timeline, mechanical envelope, and budget did not let us be too picky about our imaging optics here. Ultimately this lens was going to get us most of what we needed.
The software development ecosystem and low power demands of Raspberry Pis are hard to compete with on tight timelines and budgets. Other options are available, but this system was going to be easier and quicker to develop around for under $250.
Conveniently, the Python bindings for the camera drivers we use for the ZWO camera support ARM processors like the RPi, which lets us get an operational prototype running quickly.
Further, the Pi has integrated GPIO controls and libraries which we will use to toggle the illumination with TTL communication.
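A minimal sketch of that TTL gating logic. The idea of gating the LED around each exposure, the timing values, and the `set_pin` callback are our illustration, not prescribed wiring - on the Pi itself, `set_pin` would wrap a GPIO library call:

```python
import time

def ttl_schedule(exposure_s, n_frames, settle_s=0.05):
    """Build (led_on, duration_s) steps that gate the LED around each exposure."""
    steps = []
    for _ in range(n_frames):
        steps.append((True, exposure_s))   # drive the TTL line high during exposure
        steps.append((False, settle_s))    # drop it low between frames
    return steps

def run_schedule(steps, set_pin, sleep=time.sleep):
    """set_pin(bool) is hardware-specific; injecting it keeps the logic testable."""
    for led_on, duration in steps:
        set_pin(led_on)
        sleep(duration)
    set_pin(False)  # always leave the illumination off when done
```

On the Pi, `set_pin` could toggle a gpiozero `DigitalOutputDevice` wired to the LED driver's TTL input; off the Pi, you can pass a list's `append` method to verify the pulse sequence.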
And one of the key selling points for this architecture was the natively supported touch screens that Raspberry Pi offers. This let us build a simple control GUI around the Pi without needing an external computer to run it. This saves on cable management and makes the user experience more robust.
We use a lot of Thorlabs equipment for system design. The nice thing about their LEDs is the flexibility in colors, optical powers, and drive electronics that are cross-compatible with their hardware ecosystem. Choosing Thorlabs hardware makes it simpler to extend the system to more colors in the future.
We like the T-cube LED drivers specifically because they are TTL-controllable, which integrates seamlessly with the Raspberry Pi GPIO without a bunch of extra drivers and debugging. Also, the manual power control is useful for getting the system dialed in quickly for field use.
We were not completely sure how much illumination power we needed for this system. We opted to design around an 800mW 727nm LED.
In hindsight, 800mW was overkill. But that power overhead gave us the peace of mind that we would be able to capture decent quality NIRF images.
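A back-of-the-envelope check of why 800mW was generous. The 30% optical-path efficiency and 10cm illumination spot below are illustrative assumptions, not measured values from our system:

```python
import math

led_power_mw = 800          # nominal LED output from the datasheet
path_efficiency = 0.30      # assumed losses through collimator, dichroic, window
spot_diameter_cm = 10       # assumed illumination spot at the sample

spot_area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
irradiance = led_power_mw * path_efficiency / spot_area_cm2
print(f"~{irradiance:.1f} mW/cm^2 at the sample")
```

Even with heavy assumed losses, a few mW/cm^2 over a 10cm spot leaves comfortable headroom for shorter exposures or dimmer fluorophores - hence the overkill.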
We used an adjustable collimating asphere to direct the LED illumination output, which offers some flexibility in physically building the system. Combined with a K-cube dichroic mirror mount, we could design a compact epi-illumination path that integrates with the camera without a ton of issues.
We do a lot of 3D printing. And when we are limited in the hardware we can use, 3D printing becomes a critical tool for prototyping things like this. We use CAD heavily to build custom components, which lets us get creative with fixturing and packaging off-the-shelf components. Further, 3D printing and CAD save a ton of time in getting an operational prototype working.
The enclosure needed to be compact for transport, but also hold our optomechanical components securely for system operation. Ultimately, we settled on a design that held all of the critical components and required only two external power cables to run the entire system.
What is really nice about this setup is that it is extremely configurable and adaptable to future development. It would not produce reference-grade quantitative images, but it did show NIRF images to give some credibility to prospective customers when demoing our products. We plan to extend this in the future to include camera capture settings and a slicker UI with fancier JavaScript, but we could show it as is without too much worry.
Clear requirements are the easy part. The real work happens when you start making decisions and compromises:
Battery power vs. illumination intensity: Finding high-output LEDs that could run on battery power proved challenging. We accepted some limitations in signal strength and wavelength specifications to maintain portability and extensibility to other fluorophores. In the end, the camera and LED we used were more than suitable for our needs.
Lens optimization: Our existing machine vision lenses weren't NIR-optimized, which could limit system sensitivity. The more lenses and colors in the system, the more broadband lens optimization matters. Luckily, it was not an issue for us under monochrome imaging with enough illumination power overhead. But lens selection is worth revisiting in future iterations.
Software deployment: The Linux-based capture engine was simple to develop and easy to operate - perfect for demos. But we knew upfront it wouldn't be suitable for reference measurements without significant additional work to support full bit depth image capture and graphical adjustments for capture settings. It was fine for proof of concept, not ready for the QC station. We were willing to make that compromise here.
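To make the bit-depth compromise concrete, here is a sketch of the lossy scaling a demo display pipeline performs. The function name and defaults are ours, for illustration - the point is that squashing a 16-bit frame down to 8 bits for the screen discards the dynamic range a reference measurement would need:

```python
def to_display_8bit(pixels, black=0, white=65535):
    """Map 16-bit pixel values onto 8-bit display levels - fine for a demo,
    but it throws away the precision a reference measurement needs."""
    scale = 255.0 / (white - black)
    return [max(0, min(255, round((v - black) * scale))) for v in pixels]

# Roughly 256 distinct 16-bit values collapse onto each 8-bit level,
# so small but real signal differences become indistinguishable on screen.
print(to_display_8bit([0, 32768, 65535]))
```

Supporting full bit depth means carrying the raw values all the way through capture, storage, and analysis instead of this display-oriented shortcut.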
Mechanical assembly: The initial design was meant to be a standalone, friction-fit assembly for easy flat-packing. In practice, we needed mechanical fasteners to preserve function and stability. This changed some of the original design intent for the enclosure's mechanical features, which we noted for future improvements.
Each one of these parts of the system could be a blog post of its own - and some will be! We will focus on some of the high points in this series. But next up, we will dive deep into our approach to choosing the right camera for NIRF imaging.
Let us know if you want us to dive deeper into a particular topic! Drop us a note at feedback@quelimaging.com.