Team:ETHZ Basel/InformationProcessing
From 2010.igem.org
Revision as of 19:48, 22 October 2010
Information Processing Overview
The interface between the two sub-networks, the in-vivo network and the in-silico network, is defined by the current microscope image (in-vivo -> in-silico) and the red and far-red light signals (in-silico -> in-vivo).
By interconnecting both sub-networks, we can thus close the loop and obtain the overall network, which increases the information processing capabilities significantly compared to traditional synthetic networks realized completely in-vivo.
This section describes the in-silico part of the overall network in detail. For the in-vivo part, please refer to the Biology & Wet Laboratory section.
Imaging
The cells are placed in a 50 μm (?) high flow channel that restricts their movement to the x/y-plane, preventing them from swimming out of focus. They are imaged in bright field by an automated microscope at 40x magnification approximately every 0.3 s. Each image is sent via a local network or the Internet to the controller workstation, which forwards it to Matlab/Simulink.
Cell detection and Cell tracking
In the controller workstation, the images are pre-processed by the Lemming Toolbox, and the cells are detected and tracked in real time by fast image processing algorithms developed by our team. From the change of position between microscope frames, the current direction of E. lemming is estimated.
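The direction estimate from frame-to-frame position changes can be sketched as follows. This is a minimal illustration, not the Toolbox's actual algorithm: it assumes a tracked cell is represented as a list of (x, y) positions and averages the last few displacements to suppress tracking noise.

```python
import math

def estimate_direction(track, window=3):
    """Estimate a cell's swimming direction (radians) from its last
    few tracked (x, y) positions.

    Displacements over the last `window` frame pairs are summed, which
    averages out single-frame tracking noise. Returns None if there is
    not enough movement to define a direction.
    """
    if len(track) < 2:
        return None
    pts = track[-(window + 1):]
    dx = sum(b[0] - a[0] for a, b in zip(pts, pts[1:]))
    dy = sum(b[1] - a[1] for a, b in zip(pts, pts[1:]))
    if dx == 0 and dy == 0:
        return None
    return math.atan2(dy, dx)
```

With the microscope delivering a frame roughly every 0.3 s, a window of three frames averages motion over about one second, trading responsiveness for robustness.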
User experience
The Toolbox is connected to either a joystick or a keyboard, with which the user can choose the cell he/she wants to control and interactively define the reference direction for the E. lemming in real time.
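Mapping a joystick deflection to a reference direction could look like the sketch below. The dead-zone handling and axis convention are assumptions for illustration, not the Toolbox's actual input mapping.

```python
import math

def reference_direction(axis_x, axis_y, dead_zone=0.2):
    """Map a joystick deflection (axes in [-1, 1]) to a reference
    direction in radians.

    Returns None while the stick is inside the dead zone, so the
    previously set reference direction can be kept unchanged.
    """
    if math.hypot(axis_x, axis_y) < dead_zone:
        return None
    return math.atan2(axis_y, axis_x)
```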
Controller
For controlling E. lemming, our modeling group implemented five different control algorithms based on the same template. The actual direction of E. lemming, together with the desired direction set by the user and the time-point of the simulation, forms the inputs of the algorithms, while Boolean values for red light and far-red light represent the outputs. Based on original combinations of error minimization, hysteresis, noise suppression and prediction, our algorithms decide when to send red or far-red light. This decision is then sent back through the network to the microscope computer, which activates or deactivates the respective diodes, thus closing the loop between the in-silico and in-vivo parts of the network.
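One plausible shape for such a control law, combining error minimization with hysteresis, is sketched below. The thresholds, the omission of the time-point input, and the assumption that red light induces tumbling while far-red light restores running are all illustrative simplifications, not the team's actual algorithms.

```python
import math

def angular_error(current, reference):
    """Smallest signed angle (radians) from the current heading
    to the reference heading, always in (-pi, pi]."""
    return math.atan2(math.sin(reference - current),
                      math.cos(reference - current))

def hysteresis_controller(current, reference, tumbling,
                          on_threshold=math.radians(45),
                          off_threshold=math.radians(15)):
    """Decide the two Boolean light outputs with hysteresis.

    If the heading error grows beyond `on_threshold`, request red light
    (assumed here to induce tumbling); once the error has fallen below
    `off_threshold`, request far-red light (assumed to restore running).
    Between the two thresholds the previous state is kept, which
    suppresses chattering caused by noisy direction estimates.
    Returns (red, far_red, new_tumbling_state).
    """
    err = abs(angular_error(current, reference))
    if not tumbling and err > on_threshold:
        tumbling = True
    elif tumbling and err < off_threshold:
        tumbling = False
    red, far_red = tumbling, not tumbling
    return red, far_red, tumbling
```

The gap between the on- and off-thresholds is the hysteresis band: a direction estimate jittering by a few degrees around one threshold no longer toggles the lights on every frame.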
Furthermore, the controller detects when a cell is about to swim out of the microscope's field of view and automatically adjusts the position of the x/y-stage.
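The edge-detection logic behind the stage adjustment can be sketched as a simple margin test in pixel coordinates. The margin width and the sign convention of the returned stage move are assumptions for illustration.

```python
def stage_correction(cell_xy, image_size, margin=50):
    """Return an (dx, dy) correction (in pixels) that re-centers the
    selected cell whenever it drifts within `margin` pixels of the
    image border; (0, 0) while it is safely inside the field of view.

    The sign convention assumes a positive move shifts the image
    content so the cell lands back at the center.
    """
    x, y = cell_xy
    w, h = image_size
    if margin < x < w - margin and margin < y < h - margin:
        return (0, 0)
    return (x - w // 2, y - h // 2)
```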
Visualization
Finally, the microscope image is post-processed to show the positions of all cells as well as the selected cell with its current and reference directions, and is displayed on the computer screen or with a projector.