Nikon two-layer sensor patent

[Image: two-layer sensor (nikon-two-layers-sensor-patent2)]

Nikon has filed a new patent in Japan for a two-layer sensor with phase-detection AF in one layer and contrast-based AF in the other:

[Image: phase detection direction (nikon-two-layers-sensor-patent1)]

[Image: nikon-two-layer-sensor-patent]

  • Patent: 2016-192645
  • Published: November 10th, 2016
  • Filed on: March 31, 2015

Via Egami

  • Bring it on already. Let’s see at least some prototypes in operation accompanied by some ‘hard’ analysis (as opposed to condescending marketing puffery) of the pros and cons of the technology. It will take a combination of BSI / stacked sensor designs to make any significant gains in silicon-based sensor performance. Probably completely different materials and / or computational photography to get us beyond that.

  • RMJ

    Interesting how it seems to have a different color mask on the second layer (magenta, yellow, cyan).
    If it were possible to combine the two color masks, it would surely have an effect on color accuracy.

    Logically, magenta (blue + red) seems to be the most common filter on that layer, just as green is on a normal layer, so all the colors end up being used equally again.

  • Wilson

    If it’s good enough for the human eyeball, it’s good enough for me. Does anyone with a better understanding know whether this type of dual-layer sensor would be used specifically for the autofocus sensor, or whether it could be incorporated into the main imaging sensor if Nikon ever made an F-mount mirrorless monster?
    https://uploads.disquscdn.com/images/1c4ce1192efad6f65e0f8d4c6bda192bba5df5c66a25e920782708f29529b4e8.png

  • animalsbybarry

    This seems to be designed for mirrorless cameras.
    I am anxious for a new high-end mirrorless Nikon camera… (which I hope will be E-mount).
    But considering the financial difficulties and layoffs currently plaguing Nikon, I have to wonder if we will see any major new Nikon products in the immediate future.

    • If it’s E-mount I’ll eat my shorts. Nikon are too closed for such a thing.

    • Gerard Roulssen

      Yeah, there will be two cameras; one in E mount and the other in EF mount. They’re still not set on X mount, though …

  • animalsbybarry

    The first (horizontal) color filter layer consists of Y, M, C, which are primary pigment colors and respectively absorb only one primary light color each: B, G, R.
    These colors are directly below the complementary color filters.

    I am not sure exactly how they are supposed to work together.
    I will be interested in learning just how it is intended to work.

    • MB

      Seems to me that it is the other way around … the first is some kind of organic sensor layer that absorbs magenta and only passes green to the second layer, where you actually do not need any filter …
      Nikon has already filed similar patents in the past and has been developing this idea for some time:
      http://image-sensors-world.blogspot.rs/2013/07/nikon-proposes-stacking-of-af-and.html

      • animalsbybarry

        Red, green and blue are the primary colors of LIGHT; add the three together and you get white light.
        Yellow, magenta and cyan are primary PIGMENT colors (process colors); each absorbs only a single primary light color and transmits all the others.
        Magenta absorbs green light only!!! Therefore only red and blue light will reach the green filter beneath it… except in the central gap.

        • MB

          You are absolutely right, of course, but it seems to me that you are completely missing the point here, because there should be no optical filters involved … otherwise, if there were, for example, green and magenta filters above the sensor surface, you would get absolutely nothing, as you are probably very well aware …
          The idea is that you have one sensor layer capable of absorbing some wavelengths of light for photoelectric conversion, blue and red for example, and another beneath it that captures the remaining photons, let’s say green … the first layer would actually act as a green filter, but its own output would be magenta, because there is no color separation between the red and blue components …
          This approach, theoretically at least, has many advantages over a Bayer setup … for example, you would capture all the available light, not just 1/3; the values from the first layer, which detects the complementary colors, could be used for color correction at every single pixel, because the sum of the primary and complementary readings should equal the sum of all primary colors, so you could compute closer values than by just using adjacent photosites; you would also get less reflection from the sensor, because all the light would be absorbed; and of course you could make a sensor with cross-type PDAF, as in this particular Nikon patent …
          Of course, I could be misinterpreting the patent …
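To make MB's idea a bit more concrete, here is a minimal toy sketch (my own illustration with made-up helper names, not code or math from the patent), assuming ideal, equal-sensitivity layers where the top organic layer absorbs red + blue (magenta) and passes green to the layer beneath:

```python
import numpy as np

# Toy model of one stacked pixel site: the top (organic) layer absorbs
# magenta (red + blue together, with no separation between them), and the
# silicon layer beneath absorbs the remaining green.

def stacked_pixel_readout(r, g, b):
    """Return (top, bottom) signals for incoming light with components r, g, b."""
    top = r + b      # magenta reading: red + blue, summed
    bottom = g       # green reading: whatever the top layer passed through
    return top, bottom

def reconstruct(top, bottom, r_neighbors, b_neighbors):
    """Estimate R, G, B at this pixel: G is measured directly, while R and B
    are interpolated from neighbors and then rescaled so that R + B matches
    the magenta signal actually measured at this very pixel."""
    g = bottom
    r_est, b_est = np.mean(r_neighbors), np.mean(b_neighbors)
    scale = top / (r_est + b_est) if (r_est + b_est) > 0 else 0.0
    return r_est * scale, g, b_est * scale

# Example: a pixel sees light with (r, g, b) = (0.6, 0.3, 0.1)
top, bottom = stacked_pixel_readout(0.6, 0.3, 0.1)
print(top + bottom)   # 1.0 -> all the light is counted (per-pixel luminance)
print(reconstruct(top, bottom, r_neighbors=[0.5, 0.7], b_neighbors=[0.10, 0.12]))
```

The measured magenta sum acts as a per-pixel constraint on the interpolated red and blue values, which is the "fewer unknowns" advantage discussed further down the thread.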

          • animalsbybarry

            My best guess is that this arrangement is to facilitate superior autofocusing.
            The top layer allows a thin horizontal band of white light to pass through.
            The second layer (green) absorbs all the light not captured in the first layer, except for a very narrow band.
            The resulting light reaching the lower pixel would be a cross, with a horizontal bar of green intersecting a vertical bar of red and blue light. The center where they intersect would be a small spot of white light.
            This cross with a white center would occur in a different color combination for the other pixel filter combinations.
            I expect this will in some manner result in superior autofocusing suitable for mirrorless cameras, as well as provide some white light to the lower pixels for improved sensitivity and dynamic range… but I am at this point only guessing.

        • Eric Calabros

          It’s not like two filters stuck to each other. Green pixels in your camera lose 2/3 of the color info; a magenta absorber above them could capture the otherwise wasted red+blue data and let the rest reach the green absorber beneath. It still needs some interpolation to find discrete values for red and blue, but there are fewer unknowns, so the result is more accurate. In theory it should even increase the SNR, but in practice it decreases it because of additional wiring and shading and leaking and whatnot. Maybe Nikon has been waiting all these years to solve all of those problems plus focus in one slam-dunk invention, but I wouldn’t bet on that.
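On the "in theory should even increase the SNR" point, here is a quick shot-noise back-of-the-envelope sketch (idealized numbers of my own, deliberately ignoring the wiring, shading and leakage losses mentioned above):

```python
import math

# A Bayer green pixel throws away the red + blue photons; a stacked pixel
# keeps them in a second layer. Compare shot-noise-limited SNR for both,
# assuming (crudely) that the light splits evenly between R, G and B.

photons_total = 3000                      # photons arriving at one pixel site
bayer_signal = photons_total / 3          # green only
stacked_signal = photons_total            # green layer + magenta layer, summed

snr_bayer = bayer_signal / math.sqrt(bayer_signal)        # SNR = sqrt(signal)
snr_stacked = stacked_signal / math.sqrt(stacked_signal)

print(f"Bayer ~{snr_bayer:.1f}, stacked ~{snr_stacked:.1f}")   # ~31.6 vs ~54.8
```

Roughly a √3 gain in luminance SNR in this ideal case, which is exactly the headroom that the real-world losses Eric mentions then eat into.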

  • Aldo

    These seem to be the building blocks of Nikon’s first pro mirrorless camera.

    • Thom Hogan

      I don’t think so. In the patent information it appears closer to a light-field device with a small sensor. Moreover, it’s a CCD for some reason.

      • Eric Calabros

        Because it’s likely about a DSLR linear AF sensor.

        • Balder the Brave

          Nikon has been working on the dual-pixel idea since 2012 (the same moment Canon published their own first patent). You can find those Nikon patents if you search with Google. But lately they have been doing a lot on two-layer imaging sensors (this year, more than 4 patents: AF, AE, global shutter, stacked sensors). They said they would launch mirrorless cameras when they think they are ready. Seems that something big is coming!!!

        • Thom Hogan

          I suspect you’re right.

        • KnightPhoto

          What’s a “DSLR linear AF sensor”?

          • Eric Calabros

            The sensor in the AF module.

      • Carleton Foxx

        Because CCD is still the better tech when it comes to color rendering?

        • C_QQ_C

          To me it looks more like Nikon stepping into the video-cam market with (hopefully) a cam with interchangeable lenses like the BlackMagic series; that would be really interesting (wishful thinking… 🙂 )

  • Oh man, they’re having trouble bringing a camera with a one-layer sensor to market (still waiting on my DL), and now this? Maybe they should start working on bringing decent 4k tools to the prosumer (and above) line, and then a wifi+software solution that doesn’t suck to… every camera. Are we even sure this isn’t for one of their industrial optics tools?

    • Fly Moon

      So because they didn’t bring out a camera for you in the 2nd half of 2016, now they shouldn’t do any R&D?

      • Well, let’s hope we get a chance to see the new R&D in a product. I just think there’s some tech in the cameras right now that needs improving, and some R&D (e.g. KeyMission) that should never have happened.

  • Jim Huang

    Can anyone please explain to me the difference between this and Canon’s dual pixel AF? They seem pretty much the same to me.

    • RC Jenkins

      I’m not sure, but I believe that Canon splits ‘single’ pixels on the same 2D plane into individual photodiodes and then uses them for both phase detection (comparing the pairs) and imaging (where each pixel performs 2 functions). I’d assume that this means they detect both phase and contrast differences along the same axis (horizontal or vertical)–not “cross-type”.

      This seems to be for two different layers (3D plane), each performing a different dedicated function, and each being rotated 90 degrees with respect to the other. One layer houses split pixels (only for phase detection), while the second layer houses imaging pixels (contrast + imaging). Because these are split and rotated, it seems that the phase detection will be on one plane (say, horizontal), while the contrast detection will be on the other plane (say, vertical). This means it’s closer to “cross type” autofocus sensors and could offer more accuracy (and perhaps some speed).

      Functionally, they should do the same thing. Performance could be different in different scenarios.

      Pros / cons? I’m not sure–but diffraction springs to mind (in both).
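To make the "comparing the pairs" part concrete, here is a minimal illustrative sketch (not Nikon's or Canon's actual algorithm) of how a pair of half-pixel signals yields a phase estimate; a second, 90-degree-rotated layer would simply repeat the same search along the perpendicular axis to give cross-type coverage:

```python
import numpy as np

# The two halves of a split pixel see the scene through opposite sides of the
# lens pupil, so a defocused edge appears shifted between the two half-images.
# The size and sign of the best-matching shift tell the camera how far, and in
# which direction, to drive focus.

def phase_disparity(left, right, max_shift=8):
    """Shift (in pixels) that best aligns `right` with `left`, found by a
    sum-of-absolute-differences search over a central window."""
    lo, hi = max_shift, len(left) - max_shift   # window every shift can cover
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = np.abs(left[lo:hi] - right[lo + s:hi + s]).sum()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# Synthetic 1D edge, displaced by 3 pixels between the two pupil views
x = np.arange(64)
left = 1.0 / (1.0 + np.exp(-(x - 30.0)))
right = 1.0 / (1.0 + np.exp(-(x - 33.0)))

print(phase_disparity(left, right))   # -> 3 (pixels of defocus-induced shift)
```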

  • CaMeRa QuEsT

    Admin, it looks more like one layer of vertical phase-detection / magenta-yellow-cyan-magenta sites and a second layer of horizontal phase-detection / green-blue-red-green sites. So this tech one-ups Canon’s Dual Pixel tech (what would Nikon call this, QuadPixel? DeepColor?), and if Nikon can really pull this off, then oh boy oh boy, it will make for quite an awesome picture. Also, this would be their biggest tech hit since the FA’s introduction of Matrix Metering. Indeed, proof that the Nikon Imaging division is still alive and well!

  • I think this is Nikon’s first step toward a fully functional and fast-working live view in their DSLRs, and secondly maybe an introduction to a new mirrorless system. Can’t wait to see how they will implement it in their tech.

  • sickheadache

    I am an old lady; even Darrin could not explain it to me… help… before I get a sick headache.

  • tobi

    LOL guys, a patent in no way means this tech is imminent. Nikon has at least 4 other nice sensor techs patented in the last 10-15 years that have not come to pass yet, that I know of… so don’t get your hopes up. Even if this is “on the way”, it will take at least 3-6 years to get it to production prototypes.

  • Zak Zoezie

    So if the light needs to be distributed (maybe or maybe not equally) over 2 layers, I guess the top layer must be translucent, right? Where have I heard of that technology before …

  • What I hope for is for Nikon to produce a new and true AF module for a full-frame camera. Until now, they have been using an AF module designed for DX cameras, which is why the AF points are concentrated too much in the middle of the frame.

    To make it simple: D500 AF coverage on a full-frame camera.

    • Joe Schmidt

      Do you really think the flagship D5 has an AF module designed for DX cameras? You must be kidding!

      Autofocus system design is not as easy as one may assume. Here’s some very good background reading: https://www.dpreview.com/forums/post/54211961.

      Short summary: Trying to set AF points closer to the image borders just gets one into trouble with the limited AF sub-mirror size, lens distortion and aberrations, and eclipsing of the exit pupil. Those are the main reasons why we don’t have wider coverage of our FX sensors.

    • Shutterbug

      I am not sure you understand why the coverage is the way it is. In order to increase AF point coverage much more in FX cameras the way it’s currently done, there would need to be a physical redesign of key parts/structures of the camera. Light comes in, is reflected down to a smaller mirror, and from there onto the AF sensor array in the bottom of the camera. That light path is physically smaller than the sensor, and that’s why you don’t see 100% FX coverage on ANY body with a traditional AF system, not just Nikon. You’re right that coverage isn’t 100% (it isn’t on any body; Nikon actually has the best FX coverage with traditional AF), but they are not using a DX-only module. What you suggest, D500 coverage on an FX camera, is not possible with traditional AF without some sort of complete redesign that nobody wants to do, probably for reasons we don’t even know yet.

      • Joe Schmidt

        Even if the camera body could physically provide phase-detection AF in the far FX corner, it would probably work only with a few top-of-the-line prime lenses. Take a typical FX and DX lens and look at the remaining resolution and the aberrations that occur towards the respective image corners. A “standard” FX edge is currently not really usable at pixel level. Finding the best focus in the area of the FX corner is a challenge, even if you have the full RGB information…

        • 24×36

          I think you’re overblowing the issue big time. If you were for whatever reason actually placing your subject on the borders of the frame, a focus point out where your subject is would be what you want. Now why you would want your subject crowding the edge of the frame is another whole discussion…

    • Mike

      Man, you would have been disappointed with Nikon’s flagship film cameras back in the day. 5 point AF or less. Less than 6fps with a “buffer” of only 36 RAW photos before having to change the memory media. Lol.

    • 24×36

      You’ve got that backwards – DX cameras use an autofocus module designed for FX cameras.

  • TwoStrayCats

    Your Turkey was screaming all night long. Thank the gods we don’t have to work today.

  • Adam Brown

    So is this meant for small-sensor products or a large-sensor mirrorless?
    I saw a recent interview where Nikon indicated they are working on OSPDAF in their DSLR live view, but with no indication of when it will be ready.
    Sony and Canon both offer a much more usable live view than Nikon.

  • Pep

    True, reflex cameras are such a kludge…

  • Balder the Brave

    One should just take a look at this article:

    http://www.imaging-resource.com/news/2016/12/11/patent-roundup-a-conversation-with-dave-etchells-about-recent-sensor-innova

    “DE: Actually the first thing I noticed was that it’s not showing a combination of phase-detect and contrast-detect layers, but rather two phase-detect layers, one over the other. And it’s very clever, what they’re doing; it depends on a translucent organic sensor layer. The pixels are split in half, as is the case with Canon’s Dual-Pixel AF technology, but here they have two layers like that. On one layer they’re split in half horizontally, on the other they’re split in half vertically. Even without reading the text, the illustrations immediately imply two phase-detect layers. Even more interesting and particularly clever about it, the organic layer is going to be semi-transparent. It’s going to pass some wavelengths of light onto the silicon sensor below it.

    AA: So what are the potential advantages of this layout?

    DE: The main advantage is that it’s on-chip phase detection, and every phase-detect element is a cross-type sensor as opposed to only horizontal or vertical; you would be able to have phase-detect autofocus in live view for SLR cameras.

    Also, you would automatically get really great greyscale images, because you are in fact capturing all the light at each pixel. Assuming that the two sensors were equal sensitivity and had equal photon efficiency (an admittedly big assumption), if you add together the signal from the green and magenta, you’ve got white. You add together blue and yellow and you’ve got white. In a conventional sensor you have to demosaic it, and then convert to black and white; here you’ve got raw luminance data from each pixel.”

    If Dave Etchells (Editor-in-Chief of Imaging Resource) had had the opportunity to read the full patent (he had only seen the first 3 drawings), he would have seen some of the key concepts behind this patent:

    – Not just dual pixels as in Canon’s patent… but quad pixels and cross-type AF points across the full sensor

    – the capability to produce stereoscopic images thanks to parallax (see the rough sketch at the end of this comment)

    – and my guess: more robust predictive-AF algorithms for PDAF, because you have information about direction, depth, …

    Nikon’s 1″ PDAF was already acclaimed for being the first very good implementation of on-sensor PDAF (hybrid PDAF), but it seems that with this patent Nikon may succeed in putting that dual-pixel technology on a level fit for professional photographers (cross-type AF points all over the sensor in live view for DSLRs)!!!

    The question that may arise is… what would be left for APS-C or full-frame mirrorless cameras if a DSLR can work better in live view than the best mirrorless!!!

    Size and cost of fabrication?
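On the stereoscopic/parallax bullet above, here is a very rough illustrative calculation (all numbers are my own assumptions, not taken from the patent) of how split-pixel disparity maps to subject distance via the standard stereo relation distance ≈ focal length × baseline / disparity:

```python
# The two half-pixel views form a stereo pair whose baseline is set by the
# separation of the two halves of the lens exit pupil, so measured disparity
# can be turned into an (approximate) subject distance.

focal_length_mm = 50.0     # assumed lens
baseline_mm = 10.0         # assumed effective separation of the pupil halves
pixel_pitch_mm = 0.005     # assumed 5 µm pixel pitch

for disparity_px in (1, 2, 5, 10):
    disparity_mm = disparity_px * pixel_pitch_mm
    distance_m = focal_length_mm * baseline_mm / disparity_mm / 1000.0
    print(f"{disparity_px:>2} px disparity -> subject at roughly {distance_m:.0f} m")
```

That per-point depth information is also what could feed the more robust predictive-AF algorithms mentioned in the last bullet.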

  • KnightPhoto

    Very interesting, thanks for ferreting that out! A good Nikon DSLR with the latest 153-point PDAF plus OSPDAF as an addition would be a very important development indeed. Add in an optional EVF and I would never need a mirrorless. Although I’d take a mirrorless implementation too, if that is where it shows up.
