As part of our work I get to do quite a bit of photography and videography. And an interesting shot is often one taken from a tight spot.
The EOS-M from Canon was a very handy camera – the size of the ‘heroic’ small video cameras, but with an APS-C sensor.
It helped that they were being sold at 40-50% off the original price, because a firmware issue had turned many people off buying them.
As Warren Buffett might say – buy value, not perception. This little camera was indeed perceived as a slowpoke because of its comically slow autofocus, but having spent enough time writing algorithms for servo control, I knew it was probably just a firmware update away from being faster. That turned out to be the case when firmware 2.0.2 was released. Suddenly the camera focused twice as fast, and perceptually it felt like someone had woken it up with BTTF3 Wake-Up Juice. Amazing? No, just engineering:
First, a camera with autofocus actually has to move parts of the lens using a motor, then check how blurry the image is, then repeat hundreds of times a second. It’s really a robot eyeball, so we’ll call it an “EyeBot”. Like all robots, its motion is determined by its internal control system – which really means the pathway from sensing to deciding to doing and back again.
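The move-check-repeat loop above is the essence of contrast-detect autofocus. Here is a minimal sketch of that idea – hill-climbing on a sharpness score. The `sharpness` function is a stand-in for a real contrast metric, and none of this is Canon's actual firmware:

```python
# Hypothetical sketch of the "EyeBot" loop: step the lens, measure sharpness,
# keep stepping while sharpness improves, and refine when it stops improving.

def sharpness(lens_position):
    # Stand-in for a real contrast metric (e.g. sum of squared image
    # gradients). Here: a simple peak at lens position 50.
    return -(lens_position - 50) ** 2

def autofocus(start, step=4, min_step=1):
    pos = start
    best = sharpness(pos)
    direction = 1
    while step >= min_step:
        candidate = pos + direction * step
        score = sharpness(candidate)
        if score > best:            # sharper: keep moving this way
            pos, best = candidate, score
        else:                       # overshot the peak: reverse and refine
            direction = -direction
            step //= 2
    return pos

print(autofocus(start=10))  # converges to the sharpest position, 50
```

Note the overshoot-and-reverse step: the lens has to sweep past best focus to know it was there, which is part of why contrast-detect systems feel "hunty" when the loop is slow.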
Remember the old saying: “Fast, Cheap, or Good: Pick any two.” ?
For an EyeBot control system, the three items are: “Fast, Power-efficient, or Accurate: pick any two-ish.”
Accurate: of course we want the autofocus to be accurate – that’s the whole point. And really, the motor and gearing system that do this have to be Wildly Accurate, moving lens elements by microns. We need the accuracy; otherwise we would sell no cameras at all with an “Autoblur system”.
Fast – the amount of power used by an EyeBot is a lot like the fuel consumption of a car: a big motor will get you going fast when you stomp on the gas, but your fuel consumption can go as much as ten times above highway cruising. To track moving subjects, an autofocus system needs to get the lens glass moving back and forth quickly, so it has to accelerate and decelerate hard – even faster than a race car.
Power Efficient – this means the foot is on the gas a lot, and the tank can empty pretty quickly to get the fast response we want. Although the weight and distance travelled are tiny, the acceleration is high, so the power used is quite high compared to battery size. Designers have to keep in mind that many consumer-grade cameras are left in ‘continuous focus’ mode, so the lens is often refocusing, using battery power. It’s often the only moving part on the camera, and a big draw on the battery (though nowhere near as much as the screen).
Pick any two-ish? Sounds like there is a loophole! How do we get all three?
Well, the Accurate, Efficient, Fast equation can be tuned quite a bit. The theoretical best performance is determined by the power of the motor and the accuracy of the drive system, but these are rarely maxed out – just as you never floor your car’s gas or brakes to drive around town. In the case of the EOS-M there was likely room to simply feed more power into the autofocus system, and maybe even run the focus algorithm a bit faster. In other words, increase the metabolism of the camera. This would of course drain the battery faster, but when the camera is not selling because of autofocus issues, that’s an OK trade to make.
‘Break the Rules’.
If you redesign the autofocus algorithm software itself, maybe you can get a free lunch. This happens all the time in control system design – a smarter algorithm makes all the difference in hardware usefulness. To go back to the car analogy: hitting an autofocus point fast and accurately is like being in a traffic jam where you always want to stay exactly 55mm behind the bumper of the car in front of you. Although you are never going that fast, you have to sense her movement and match her acceleration perfectly. Now here is the neat part – if you are always looking at her bumper, you only need to accelerate as fast as she does, because there is no lag between when she moves and when you respond. But if you are like our EyeBot, you are only glancing at her bumper every once in a while, when someone starts pressing the shutter, and then you have to get to 55mm behind her bumper as fast as possible. She may be 30 feet ahead, or a mile ahead, or 226mm back. Now you have to make up this distance quickly, and that means a big motor and incredible driving skills: slam on the gas pedal right up until the last possible moment, then slam on the brakes to come to a screeching halt at exactly 55mm behind her bumper again.
So, if you are a pro driver, and know your car really well, you can minimize this tailgate-time – or in EyeBot terms, autofocus time. Having a big motor or glancing at her bumper more often would help, but if you aren’t able to switch out your motor or glance more often, you need Driver Training to improve your performance. You need to know the car’s characteristics and dimensions, and you need to know precisely when to take your lead foot off the gas and slam on the brakes. You need to know the road surface and your tires, too!
A good control system, well written, is like a professional driver. Take your personal car, maybe a Honda Accord, and try to tailgate someone at 55mm – chances are a pro driver in the same car will do it better.
Firmware updates are like that – same car, new driver. And in a case where someone was reducing performance to extend battery life, you now have the opportunity to pull a Scotty – divert all power to engines and get that extra boost in performance.
In summary – a good electronics company will update firmware frequently, and never discount the change it can make to the item you bought: there is a lot of driver training they can retroactively put into your camera.
If you want to check out how tuning a control algorithm can affect performance, try my little interactive PID control loop.
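For the curious, here is roughly what sits under the hood of a PID demo like that: a proportional-integral-derivative controller driving a toy plant toward a target. The plant model and gains below are illustrative assumptions, not values from any real lens:

```python
# Minimal discrete PID loop: the controller output u is treated as a velocity
# command for a toy "lens" that integrates it into position. Gains kp, ki, kd
# are illustrative; tuning them trades speed against overshoot and power.

def simulate_pid(kp, ki, kd, target=1.0, dt=0.01, steps=1000):
    pos, integral, prev_err = 0.0, 0.0, target
    for _ in range(steps):
        err = target - pos
        integral += err * dt                 # I term: accumulated error
        deriv = (err - prev_err) / dt        # D term: rate of change of error
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        pos += u * dt                        # toy plant: velocity = u
    return pos

# After 10 simulated seconds, position should have settled near the target.
print(simulate_pid(kp=5.0, ki=1.0, kd=0.1))
```

Cranking up `kp` is the "feed it more power" fix from earlier; reshaping all three gains together is the "new driver, same car" fix.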