Notes on the OM-1 (digital) camera

After 16 months of use and nearly 500 000 photographs taken

cameras

Author: Pedro J. Aphalo
Published: 2023-04-12
Modified: 2023-07-29
Keywords: OM-1 digital, BSI sensor

Introduction

Quite a lot has already been written on the internet about OM System’s OM-1 camera, but here I write down some notes comparing it to the E-M1 Mk II and highlighting some use cases. I share these notes in case someone else finds them useful, although they are mostly intended for myself. Among the things frequently mentioned is a lack of improvement in image quality between the old and new image sensors (e.g., OM system OM-1 vs OM-5 – The 10 Main Differences). This is one example of where my own experience differs drastically from the quick and dirty tests used by some blog writers. There are exceptions to such statements, of course, the most notable I have come across being the thorough and very informative videos at The Narrowband Channel, with the video OM1 Sensor, Everything You Need to Know being the most relevant to the current post. For tips on using the OM-1, the videos at the channel ThomasEisl.Photography are easy to follow, well researched and consistent with my own experience.

Image noise and high resolution modes

Image noise is difficult to quantify objectively in ways that are relevant to its “pictorial effect” or to human observers’ reaction to it. Furthermore, the noise seen in the final rendition of a photograph depends heavily on noise reduction during image processing. How effective a noise reduction algorithm is depends on the characteristics of the noise and of the subject matter depicted in the image. I use mostly Capture One 23 and occasionally OM Workspace. My impressions below are based on my experience with Capture One 23.

It is important to distinguish between random noise (or thermal noise, which affects each pixel at random in both time and space) and patterns, like banding, that affect pixels systematically in space and/or time. Noise in film is mostly random, occasionally affected by some clustering, while electronic sensors tend to suffer from unevenly distributed noise, either the result of spatially uneven sensor temperature or of systematic variation among pixels: differences in inherent light sensitivity, in the wiring of pixels at different locations in the array, or in the analogue-to-digital conversion circuitry serving different pixels.

I think different features of image noise annoy different people differently. I find any recognizable pattern in the noise especially disturbing. In this respect the OM-1 performs much, much better than the E-M1 (Mk I) or E-M1 Mk II. As a result, the upper ISO limit I am happy to use is 800 with the E-M1 Mk II and 6400 with the OM-1. This makes Auto ISO my most frequent setting in the OM-1, while I almost never used it in the E-M1 Mk II. In practice these limits allow significant cropping of images and printing to A3, based on my own standards. These self-imposed limits assume no AI-based noise reduction in post-processing; with such noise reduction, even higher ISO settings would become usable in some cases.

That noise is spatially uniform and mostly random in the OM-1 not only makes it less disturbing in the image but also easier to remove without removing features from the depicted object. In Capture One this means that one can use much higher settings for clarity and structure without introducing visible artefacts such as banding. It also means that any procedure that averages multiple readings from each pixel will decrease noise very effectively. High resolution 80 MP images using sensor shift are noiseless or nearly noiseless even at high ISO settings, as the merging of 8 frames cancels most of the random noise. An alternative approach that is also very effective is to take multiple photographs using the same exposure settings and do an HDR merge of them in Capture One, as Capture One will in this case align and average the images. (Of course, averaging is also possible with other software.)
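As a back-of-the-envelope illustration of why averaging works, the sketch below simulates stacking several frames of the same scene affected by independent random noise; the noise standard deviation of the average shrinks roughly as one over the square root of the number of frames. The frame count, noise level and flat test image are my own assumptions for the simulation, not measurements from the OM-1.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical flat grey scene and assumed per-frame random noise level.
true_image = np.full((100, 100), 100.0)  # arbitrary units
noise_sd = 10.0                          # assumed per-frame noise std. dev.
n_frames = 8                             # e.g. the 8 exposures of the 80 MP mode

# Simulate n_frames exposures of the same scene with independent noise.
frames = true_image + rng.normal(0.0, noise_sd, size=(n_frames, *true_image.shape))

# Averaging the frames: the random component largely cancels out.
stacked = frames.mean(axis=0)

print(f"noise SD, single frame: {frames[0].std():.2f}")
print(f"noise SD, {n_frames}-frame average: {stacked.std():.2f} "
      f"(theory: {noise_sd / np.sqrt(n_frames):.2f})")
```

Real sensor-shift captures also change the spatial sampling, so the gain in practice is not exactly this idealized factor, but the direction of the effect is the same.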

An intriguing and predictable consequence of the noise being predominantly random is that it should affect short exposures the most. This could explain some of the contrasting assessments of the OM-1 sensor and is something worth testing, as it could be taken advantage of in everyday use of the camera.

Live ND in the OM-1 is based on image averaging, so it can also reduce noise. The usable shutter speeds are, however, limited. I need to test how effective Live ND Shooting is at cancelling out noise in camera.

It is worth noting that the high resolution mode based on sensor shift is usable with any lens, even old manual lenses attached through purely mechanical adapters. Even 50 MP handheld images are possible, as the focal length can be entered and saved very precisely, allowing very effective image stabilization. The Live ND Shooting mode can also be used with any lens, including adapted ones.

Digitizing film negatives and positive transparencies using the 80 MPix high resolution mode works extremely well.

Sensor dynamic range

Do OM-1 ORF files converted using default settings have a wider dynamic range? My impression is that they do not, and there is not much reason for camera designers to alter this. The relevant question is whether the ORF files from the OM-1 can withstand a wider range of adjustment when they are edited. This is currently only a subjective impression, but I see little change from the E-M1 Mk II in how much recovery is possible in the shadows, while I see an improvement in the highlights. Of course, whether the extra “room” is in highlights or shadows depends on how one sets the exposure.

Focusing in challenging situations

Focus tracking of birds works well most of the time. The AI system locates the area of the frame where the bird is, but focusing itself does not rely on AI; the camera just focuses on whatever high-contrast object is in this area. Sometimes this is a branch in front of a bird. In such cases it helps to manually pre-focus on the bird.

When using subject recognition in a crowded situation, such as a bird with twigs in front of it, keep the shutter release half pressed and focus manually until the subject to be recognized is at least partly in focus. After this, the recognition algorithm usually finds even small birds, as long as the occluding material remains out of focus. This works because, if manual focus is enabled during automatic focusing (a setting in the AF menu), manual focus works not only as a follow-up to single AF but also allows manual adjustment of focus during continuous AF with tracking and AI subject detection.

When there are many birds in the same frame and they move so that recognition fails on the current target, a different bird can become the new target, I think even with tracking enabled. This rarely happens with birds in flight, but it does occasionally with birds on the ground rendered at a small size in the frame.

Setting a small focus area does not limit the tracking of birds, which continues across the whole frame, but it does limit where the target is initially locked: this is extremely useful when one wants to focus on a specific bird in a flock, in flight or on the ground. (I am not sure, but this may depend on the firmware version.)

To quickly change the size of the focusing area without taking the eye from the viewfinder, press down the joystick and turn the front wheel.

To move the focus area use the joystick or the cursor pad.

If you have a button assigned to switching subject recognition on and off, keeping it pressed (a long press) lets you change the subject type with the front wheel.

Focus tracking is obviously not only useful for moving subjects. I usually find it faster to focus and then re-frame than to move the focusing area with the joystick. For close-up work, subject tracking can compensate for camera movement towards or away from the subject, even intentional movement used to adjust framing or magnification.

To my amazement, autofocus works in pouring rain. The E-M1 Mk II struggled with autofocus in heavy rain, even for steady subjects (I think the original E-M1 handled this better than the Mk II). The OM-1 seems to be only minimally disturbed. Tracking birds in flight with the 300 mm F:4.0 objective (600 mm eq.) in heavy rain works reasonably well, with about 1 in 3 or 4 frames having tack-sharp focus on the eye.

Using the smallest focus area available makes it possible to focus on extremely small features of a subject. Using this approach with long focal lengths or for macro requires some practice and persistence, but it allows one to focus, for example with the camera handheld, on the head of an ant or on a distant bird.

Focus bracketing

With faster sequential shooting capability, focus bracketing is a lot faster in the OM-1 than in the E-M1 Mk II. With the camera on a tripod or copy stand it works perfectly with both slow and fast shutter speeds.

I have not yet used in-camera focus stacking much with close-up or macro subjects. In tests it worked well, and it has the advantage of giving quick feedback on whether alignment succeeded or not.

I have not yet used hand-held focus bracketing followed by off-camera focus stacking much either. In Helicon Focus, alignment does not always succeed: it may fail when shooting handheld at high magnification, at least for deep stacks. In situations like this, a macro lens with image stabilization such as the new 90 mm 1:3.5 performs better than the 60 mm 1:2.8 macro.

Stabilization

Stabilization has improved once again, and it is now so effective for lenses like the 300 mm F:4.0 that shutter speed is most of the time constrained by subject movement. The same seems true for the 90 mm F:3.5 Macro, even at magnifications greater than one.

The same applies to subjects like trees shaking in the wind, which will obviously not be rendered free of movement, as the different parts of a tree do not move together. Similarly, if there is no steady reference with high-contrast edges at different angles, stabilization of camera movement is less effective than with, for example, a building or bookcase in the frame.

There are limitations imposed by the nature of subjects, but overall stabilization is the strongest feature of the system: I regularly take photographs handheld with the 300 mm plus a 2x teleconverter, i.e., a 1200 mm-equivalent focal length, occasionally even of birds in flight. Still, the limiting factor for shutter speed is subject movement, not camera movement.

Good stabilization also helps a lot in tele-macro photography, for example of insects. One interesting aspect is that subject tracking works to some extent with unsupported subject types. Hopefully, in the future we will get insect recognition and tracking as a new option. Meanwhile, bird tracking mode works effectively for some insects. Subject recognition and tracking could also be useful for “static” subjects such as flowers shaking in the wind. Of course, tracking does work without subject recognition if one carefully targets the subject with a small focus area enabled.

When using adapted objectives, an additional improvement is that lens settings can be stored with a precise focal length (to 0.1 mm). The lens name, focal length and maximum aperture are saved as metadata. This allows effective image stabilization with almost any lens. There is one limitation when working at high resolution or focusing at close range: the camera does not receive information about focusing distance from an adapted objective. As this information is needed to estimate the stabilization requirement, stabilization becomes less effective. This limitation can to an extent be overcome by setting a longer focal length than the real focal length of the lens when working at close range.
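A rough way to see why the focusing distance (i.e., the magnification) matters is the following back-of-the-envelope relation for the blur produced on the sensor by camera shake; this is general optics rather than anything specific to the OM-1, and the notation is my own.

```latex
% f = focal length, m = magnification, \theta = angular shake,
% \Delta x = lateral translation of the camera during the exposure.
\[
  d_{\mathrm{blur}} \;\approx\;
  \underbrace{f\,(1+m)\,\theta}_{\text{angular shake}}
  \;+\;
  \underbrace{m\,\Delta x}_{\text{translational shake}}
\]
```

At long subject distances m is close to zero, so knowing the focal length is enough; at close range both terms grow with m, and without distance information the camera underestimates the required correction. Entering a somewhat longer focal length than the true one inflates the angular-shake correction, which is presumably why the workaround described above helps.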

The viewfinder

The higher resolution EVF of the OM-1 helps with manual focusing. However, the higher refresh rate of the EVF is what surprised me most: it makes tracking birds in flight so much easier compared to the E-M1 Mk II. By this I mean that it makes it a lot easier to keep the flying bird framed and to anticipate changes in its flight path.

The improved ease of manual focusing is of course hugely useful when using adapted manual lenses.

The new viewfinder also makes it much easier to assess depth of field.

Sequential shooting

Fast sequential shooting is much more useful than I had expected. When photographing birds it is true that one ends up with a heavy photo-culling task, but one regularly gets one image out of many in which the position of the bird and the light reflection in its eyes are “perfect”, something that, at least for me, had been almost impossible to achieve other than by accident with earlier cameras.

Capture One’s HDR merge mode can be used to average a series of equally exposed images as a way of effectively decreasing image noise. Using the OM-1’s sequential shooting mode with its excellent image stabilization, one can get a series of frames within a fraction of a second. Even birds standing on the ground or sitting on a perch frequently remain immobile long enough to get four, five or even six frames suitable for HDR merging within 1/5 s. Capture One aligns the images, so a small framing error between images does not cause problems.

With bracketed exposures, Capture One does still seem to use averaging to some extent, possibly weighted averaging.

A similar effect can be achieved using the hand-held high resolution mode, as the merging of multiple images not only increases the resolution but also reduces noise.

Already with earlier Olympus mirrorless cameras I noticed that many birds do notice the sound of the shutter, so using an electronic shutter for bird photography is advantageous not only because of the high-speed sequential shooting but also because it is silent. Thus, the faster readout of the new sensor, which reduces rolling-shutter artefacts, is a significant advantage compared to earlier E-M1 cameras.

Bulb, Live Time and Live Comp

The Bulb setting for shutter speed has been available in cameras with mechanical shutters for a very long time. The Bulb setting keeps the shutter open for as long as the shutter release is kept pressed. To avoid movement one would use a flexible release “cord”, either a mechanical cable release or a pneumatic release based on a tube and a rubber bulb that pushed air; hence the name still in use.

Some mechanical camera shutters and the earliest electronically controlled shutters also had a Time setting, which differed only in that one had to press the shutter release once to open the shutter and a second time to close it.

So, what is different in the OM-1 and some earlier cameras from Olympus is the “Live View” available in these modes. The EVF, or more conveniently the rear screen, or, if the camera is tethered, a phone, tablet or computer screen, can show a live view of the image as it builds up, which removes the usually difficult guesswork about the exposure time to use. This feature also opens the door to other approaches, such as light painting while seeing the effect live.

The Live Comp mode is unique in that it merges images in a special manner: the shadows are taken from the base exposure and only highlights from later exposures are added to it. This ensures that shadows remain dark in spite of the compositing of the highlights.
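As I understand it, this is essentially a “lighten”-type composite: after the base exposure, each later frame contributes only pixels brighter than the running composite. The snippet below is a minimal sketch of that idea, with NumPy arrays standing in for raw frames; it is my own illustration of the principle, not OM System’s actual in-camera algorithm.

```python
import numpy as np

def live_comp(base: np.ndarray, later_frames: list[np.ndarray]) -> np.ndarray:
    """Lighten-style composite: keep the base exposure's shadows and add only
    pixels from later frames that are brighter than the composite so far."""
    composite = base.astype(np.float64).copy()
    for frame in later_frames:
        composite = np.maximum(composite, frame.astype(np.float64))
    return composite

# Toy example: a dark base frame plus later frames, one containing a bright highlight.
rng = np.random.default_rng(0)
base = rng.uniform(0, 20, size=(4, 4))             # dark background (the shadows)
frames = [rng.uniform(0, 20, size=(4, 4)) for _ in range(3)]
frames[1][2, 2] = 250.0                            # e.g. a light trail appearing later

result = live_comp(base, frames)
print(result[2, 2])  # the highlight is kept; dark areas stay close to the base frame
```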

Of these modes, I have used Live Time rather extensively for UV-induced autofluorescence of plants and lichens. I normally use a UV-A flashlight that does not illuminate the whole area framed in the photograph, so I need to light-paint the image. This also makes it possible to selectively illuminate different parts of the scene being photographed. Fluorescence is weak, so exposure times from 30 s to a few minutes are common.

The sensor

Most of what I described above depends on the much improved image sensor. There is still more to the sensor: it is a back-side illuminated (BSI) sensor. This has implications not only for the sensitivity to light but also for the sensor’s optical properties. According to a video on The Narrowband Channel, this makes a huge difference with vignetting, decreasing it markedly for wide-angle objectives. MFT objectives contain memory and other electronics. The memory is used to store information describing the objective and the required corrections, which allows MFT cameras to compensate for the vignetting; adapted lenses, of course, lack such information. FT and MFT objectives were designed from the ground up to work well with front-side illuminated sensors by limiting the angle of incidence of the light on the sensor. The angle of incidence of light has a smaller effect on film than on sensors. Additionally, sensors have a filter stack and behave optically differently from film. A BSI sensor is optically more similar to film than earlier sensors, so I would expect adapted wide-angle lenses designed for film cameras to benefit the most in this respect.

I intend to test more carefully how much the BSI sensor improves the performance of adapted objectives with respect to vignetting, and possibly also in relation to colour fringing. A quick test with the Zuiko 100 mm f:2.8 film-camera objective showed it to perform very well, but this very nice objective also frequently performed well with the E-M1 and E-M1 Mk II cameras.

Note

With modern MFT objectives I have seen a surprising problem in Capture One 23: the colour fringing correction based on the lens profiles built into the program mis-corrects colour fringing! The default correction makes fringing much worse than no correction at all. Capture One applies a good correction only when analysing the images, rather than by default. This happens at least with some M.Zuiko objectives; I have not yet checked all the objectives I have access to.

Tethering and communication

Tethering through USB works reliably even with relatively long cables; it does not require a USB-C port or anything faster than USB 3.0 on the host computer, even when using bracketing or the 80 MPix high resolution mode and transferring the raw files in real time to a PC. Charging or powering the camera simultaneously with tethering does require a USB-C port with Power Delivery (PD) capable of supplying 9 V at a minimum of 3 A.

Tethering through WiFi to a LAN-connected PC is also possible, but I haven’t yet used this mode.

Remote control through Bluetooth with a tablet or phone, as well as the app itself, remains similar to what was available with the E-M1 cameras. Or rather, the improved features of the current App version also work with earlier camera models. (Bluetooth 5.0 seems to provide a very power-efficient connection, both on the phone and on the camera side.) There are two modes: remote control and live view. Both work extremely well. The remote control mode displays in red on the screen, and the brightness can be dimmed from within the App. This is ideal for working at very low light levels, as in night-time photography or astrophotography, because it avoids the contraction of the eye’s pupil. In live view mode, one can not only use the phone or tablet as one would the rear screen of the camera, including touch selection of focus points and triggering of the shutter, but also control a good number of camera settings. For immediate on-line publication of photographs to the internet it is possible to automatically download the images to the phone or tablet as they are acquired.

Geotagging in camera is done using the same App as for remote control. I use geotagging when photographing outdoors with the OM-1 camera, almost always in real-time. In-camera geotagging using a GPS log recorded on the phone while not connected also works smoothly.

I find it extremely convenient that all that needs to be done, once camera and phone have been paired, is to enable GPS logging in the phone App. The phone and camera connect automatically when the camera is switched on within range. The connection is established within a few seconds and photographs are then geotagged in real time in the camera. With an iPhone 12, the connection is reliable and neither the iPhone battery nor the camera battery discharges noticeably faster than with geotagging disabled. The status of the connection is shown by a small icon in the EVF.

The phone App records the GPS track whenever logging is enabled, irrespective of whether real-time geotagging in the camera is enabled or the Bluetooth connection to the camera is active. Thus, if the connection fails, the GPS information is not lost. Geotagging the photographs after they have been taken works very well and only requires sending the log file from the phone to the camera. The memory cards can even be removed from the camera and put back later, to do the geotagging at the end of a long shooting session if needed.

Touch screen

The touch screen, when enabled, can be used to directly select the focus point and trigger the shutter. This feature can be enabled and disabled quickly by touching an icon on the screen. As the rear screen is automatically disabled when the EVF is shaded, there is no need to explicitly disable it before using the EVF.

I have used this mode from time to time, especially when taking macro photographs or shooting close to the ground. In those situations, when one can easily see the rear screen, this works nicely and fast.

I haven’t tried this yet, but I learnt from an on-line video that the touch screen can be very handy for waist-level shooting in street photography.

Many smartphones have a similar user interface, and this feature is not new to the OM-1: it is also available in earlier OM-D camera models from Olympus. With the current version of the OI.Share App, when in Live View mode, this way of selecting the focus area and triggering the shutter also works through the phone screen.

Battery life

For photographing birds I have been using almost exclusively the Pro-Cap SH2 mode, that is, using the electronic shutter and filling the memory buffer with images while focusing: sequential shooting at 25 frames per second with exposure adjustment, focus tracking, AI-based subject recognition, and image stabilization. Under these conditions I get about 7000 to 8000 photographs from a fully charged battery, filling in the process a 128 GB and a 64 GB card. This is an improvement compared to the E-M1 Mk II, in spite of the OM-1 having a much more powerful data processor.

Using the mechanical shutter and taking a single photograph at a time, the battery discharges faster when measured as the number of photographs taken. I haven’t tested battery life for video.

Concluding remarks

The most striking result of the many improvements is that the OM-1 makes it possible to take photographs of excellent technical quality under conditions that earlier MFT or FF cameras could not handle. In addition, it allows photographs to be taken with a lot more freedom, as the image stabilization is so effective and the different computational modes cater for various special situations. With a much better EVF and ergonomics even better than the already excellent ergonomics of the E-M1 series cameras, the OM-1 is a camera I have bonded with very quickly.

Learning how to use the many features and settings of the OM-1 effectively takes time and practice, but all in all the increase in the number of “hits” per session has, in my experience, been incredibly large. The user interface also plays a key role, as adjusting settings on the go without taking the eye away from the EVF works very well. After more than a year I am still learning new tricks that the OM-1 is capable of, but by now most of the settings for the types of photographs I routinely take have become automatic for me. I still have things to explore, as the features of the OM-1 camera enable many uses and approaches that are new to me.

I do not think the OM-1 is a camera for occasional use; it is a very powerful piece of equipment offering many possibilities. It is a camera that can adapt to many situations and types of photography, not through some magical automation, but by empowering the photographer to control how the camera behaves. This is true not only in relation to its functioning, but also with respect to the user interface.

The camera works well with many different objectives, but it excels with the newer M.Zuiko PRO objectives with built-in image stabilization that works in synchronization with the in-body stabilization.