The brightness or exposure of the image also has an impact on the colors being reported. The issue is that as overall brightness increases, color saturation will start to drop. Let's look at an example to see how this occurs. A saturated red object placed in front of the camera will return an RGB measurement high in red and low in the other two channels. Once the red component is maximized, additional light can only increase the blue and green, which dilutes the measured color and lowers the saturation.
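As a concrete illustration, here is a small Java sketch that computes HSV-style saturation, (max - min)/max, for a few RGB readings of a red object under increasing light. The specific numbers are invented for illustration; only the trend matters.

```java
// Illustrative sketch: how rising overall brightness dilutes saturation.
// The RGB triples are made-up readings of a red object under increasing
// light; "saturation" here is the HSV definition (max - min) / max.
public class SaturationDemo {
    static double saturation(int r, int g, int b) {
        int max = Math.max(r, Math.max(g, b));
        int min = Math.min(r, Math.min(g, b));
        return max == 0 ? 0.0 : (max - min) / (double) max;
    }

    public static void main(String[] args) {
        int[][] samples = {
            {220, 20, 30},   // dark image: strongly saturated red
            {255, 80, 90},   // red channel has clipped at 255
            {255, 160, 170}, // extra light only raises green and blue
        };
        for (int[] s : samples) {
            System.out.printf("RGB(%d,%d,%d) -> saturation %.2f%n",
                    s[0], s[1], s[2], saturation(s[0], s[1], s[2]));
        }
    }
}
```

Running this prints saturations of roughly 0.91, 0.69, and 0.37: the same nominally red object, measured under more and more light, becomes steadily harder to distinguish by color.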
If the point is to identify the red object, it is useful to adjust the exposure to avoid diluting your principal color; the desired image will look somewhat dark except for the colored shine. There are two approaches to controlling camera exposure time. One is to let the camera compute the exposure settings automatically, based on its sensors, and then adjust the camera's brightness setting to a small number to lower the exposure time; the brightness setting acts similarly to the exposure-compensation setting on SLR cameras. The other approach is to calibrate the camera to use a custom exposure setting. To do this on a 206 or M1011, change the exposure setting to auto, expose the camera to bright lights so that it computes a short exposure, and then change the exposure setting to hold. Both approaches will result in an overall dark image with bright, saturated target colors that stand out from the background and are easier to mask.
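For teams that want to script the second approach rather than use the camera's web page, something like the sketch below could issue the same auto-then-hold sequence over the camera's HTTP parameter interface. The param.cgi endpoint is part of Axis's VAPIX API, but the exact parameter name (ImageSource.I0.Sensor.Exposure), the FRC/FRC credentials, and the use of basic authentication are assumptions to verify against your camera's documentation.

```java
// Minimal sketch of the "calibrate, then hold" exposure approach, assuming
// a VAPIX-style param.cgi interface. Parameter name and credentials are
// assumptions; check your camera's documentation.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class HoldExposure {
    static void setParam(String host, String param, String value) throws Exception {
        URL url = new URL("http://" + host
                + "/axis-cgi/param.cgi?action=update&" + param + "=" + value);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String auth = Base64.getEncoder().encodeToString("FRC:FRC".getBytes());
        conn.setRequestProperty("Authorization", "Basic " + auth);
        System.out.println(param + "=" + value + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }

    public static void main(String[] args) throws Exception {
        String host = "10.TE.AM.11"; // placeholder: substitute your team's camera address
        // 1. Point the camera at bright light and let it meter a short exposure...
        setParam(host, "ImageSource.I0.Sensor.Exposure", "auto");
        Thread.sleep(2000); // give the auto-exposure time to settle
        // 2. ...then freeze that exposure so the rest of the scene stays dark.
        setParam(host, "ImageSource.I0.Sensor.Exposure", "hold");
    }
}
```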
Image sizes shared by the supported cameras are 160x120, 320x240, and 640x480. The M10-series cameras offer additional sizes, but those aren't built into WPILib. The largest image size has four times as many pixels as the middle size, with each pixel covering one-fourth the area, and sixteen times as many pixels as the small image. The tape used on the target is 4 inches wide, and for good processing you will want that 4-inch feature to be at least two pixels wide. Using the distance equations above, a medium-size image should be fine up to the point where the field of view is around 640 inches wide, a little over 53 feet, which is nearly double the width of the FRC field; this occurs at around 60 feet away, longer than the length of the field. The small image size should be usable for processing to a distance of about 30 feet, or a little past mid-field.
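The arithmetic behind those distance limits can be checked with a short sketch. The 47-degree horizontal field of view below is an assumed figure; substitute your camera's actual specification.

```java
// Sketch of the resolution-vs-distance check described above, assuming a
// 47-degree horizontal field of view; swap in your camera's real spec.
public class TargetPixelWidth {
    public static void main(String[] args) {
        double fovDeg = 47.0;      // assumed horizontal field of view
        double tapeWidthIn = 4.0;  // vision target tape width, inches
        double distanceFt = 53.0;  // distance to the target, feet
        int[] imageWidths = {160, 320, 640};

        // Width of the scene visible at that distance, in inches.
        double fovWidthIn = 2.0 * (distanceFt * 12.0)
                * Math.tan(Math.toRadians(fovDeg / 2.0));
        for (int w : imageWidths) {
            double tapePx = tapeWidthIn * w / fovWidthIn;
            System.out.printf("%3dpx-wide image: tape is %.1f px at %.0f ft%n",
                    w, tapePx, distanceFt);
        }
    }
}
```

With these assumptions, at 53 feet the tape spans roughly 1.2 pixels in the small image, 2.3 in the medium, and 4.6 in the large, matching the rule of thumb that the medium image holds the two-pixel minimum out to a bit over 53 feet.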
Image size also impacts the time to decode and to process. Smaller images will be roughly four times faster than the next size up. If the robot or target is moving, it is quite important to minimize image processing time, since that delay widens the gap between where the target actually is and where it is perceived to be. If both robot and target are stationary, processing time is typically less important.

Note: when requesting images using LabVIEW (either the Dashboard or robot code), the resolution and frame-rate settings stored in the camera will be ignored. The LabVIEW code specifies the frame rate and resolution as part of the stream request; this does not change the settings stored in the camera, it overrides them for that specific stream. The SmartDashboard and robot code in C++ or Java will use the resolution and frame rate stored in the camera.
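To see what "part of the stream request" means, the sketch below opens an MJPEG stream with the resolution and frame rate given as query parameters, in the style of Axis's video.cgi interface. The address is a placeholder and the parameter values are examples only; the stored camera settings are untouched.

```java
// Sketch of requesting a stream with resolution and frame rate specified in
// the request itself, as the LabVIEW dashboard does. Query parameters follow
// Axis's video.cgi conventions; address and values are placeholders.
import java.io.InputStream;
import java.net.URL;

public class StreamRequest {
    public static void main(String[] args) throws Exception {
        // resolution and fps in the URL apply to this stream only; they are
        // not written back to the camera's stored configuration.
        URL url = new URL("http://10.TE.AM.11"
                + "/axis-cgi/mjpg/video.cgi?resolution=320x240&fps=15");
        try (InputStream mjpeg = url.openStream()) {
            byte[] buf = new byte[4096];
            int n = mjpeg.read(buf);
            System.out.println("read " + n + " bytes of MJPEG stream");
        }
    }
}
```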