>>Hello, everybody, everyone in this room and

hopefully lots of you out there in the virtual world. I’d like to introduce Alan Zhan who’s in

the Department of Physics, PhD student who is

soon to be graduating. His work is in

the very exciting field of Dielectric Metasurface optics. So that’s what he’s going

to talk about today. The UW group that he’s

in is ARCA [inaudible]. Fantastic work and everyone should check it out

on the web. Thanks.>>Thanks for

the introduction, Joule. I’m Alan and I’m

a physics graduate student. I’d like to present some

of the work that I've done on Design and Optimization of Dielectric Metasurface Optics

in addition to presenting some of the context

that my work fits into. Before I talk about metasurfaces, I'd like to talk about optics in general. The first thing that

comes to my mind when I think about optics are pictures. Many metasurface

researchers including myself are very much

in the business of making pictures whether they’re

for human eyes like these, which help me make images because I have bad vision, or if we're interested in imaging large things that are far away

like these using this telescope, or if we’re interested in imaging

small things that are relatively close using this

microscope objective. So these are all some pretty nice optics and they

work really well, but they’ve been known

since the late 1800s. As optical researchers,

we always want to push further to get

better functionalities, and so we’re currently pushing our optical hardware

in many different ways, and in my research, I want to push it in terms of

miniaturization and functionality. So first, talking about miniaturization, there are a lot of applications for making compact optical systems

including Internet of things, smartphones, lidar for

autonomous vehicles. One thing that is actually

really interesting is in fundamental biological studies, where this is actually the Miniscope. It's a microscope that's about

the size of your fist or smaller, and it’s small enough and light

enough such that you can put it on top of a mouse’s head. It can actually monitor neurons

firing in vivo in real time. That makes me a little

squeamish, but it's also really cool research. So another thing that

we’re really interested in is increasing functionality. I think a lot of people here

will recognize the Kinect, which is one of the first commercial

products that really pushed 3D imaging and depth sensing. Something that is also

really interesting for us is this idea of

passive optical computing. So here is a wavelet

transform of a scene, and one of the things

that we want to do is, can we think of optical elements as performing some passive

optical computation? This can be as simple as

something like a lens that produces a Fourier transform

at its focal point, or maybe something more

interesting like a classifier. So something like an image

recognition task. Can we make these passive

optical components do these tasks without using

any electronic power? Of course, I’m here to tell you that metasurfaces are the solution or at least part of

the solution to all of these problems that we're facing. So when we talk about metasurfaces, metasurfaces are actually a form of diffractive optics. When we talk about

diffractive optics, what we’re really concerned about

is the wave nature of light. So we’re no longer thinking

of light as a ray, but we’re actually

thinking of it as a wave. Given that it’s a wave,

we have two things we can control: amplitude and phase. An example of amplitude diffractive optics is the zone plate. This zone plate is designed to focus light at some finite distance away from it. It functions by blocking light that doesn't interfere constructively at the focal point, and allowing light that will interfere constructively at

the focal point to pass. It works quite well for a lot of different applications

like X-ray lenses, but ultimately if you want

an efficient optical device, you really don't want to

block half of your light. So metasurfaces are generally implemented as phase

optical elements. So in this case, this is

a diffractive phase element. It has multi-levels,

and then you can see that each of these levels

has a different thickness, and this thickness corresponds to different discrete phase shifts that the light experiences as it passes through.>>Is that an actual picture?>>That is not a metasurface.>>Okay.>>It is an actual picture.>>Okay.>>Fabricated diffractive

optical element. So if we want to understand

diffractive optics, we want to go from refractive optics

to diffractive optics. One easy way to do that

is to consider a lens. So if we consider a lens and

some wave optics picture, this lens has some refractive

index n and it has some spatially varying

thickness along this vertical axis,

which I’ll call x, and we can describe

the phase of light with some wavelength Lambda

passing through this lens as some 2Pi times refractive index

divided by Lambda multiplied by some spatially

varying thickness of the lens. As you can see from Wikipedia, we have a plane wave incident

from the top of the lens. The waves that are incident towards the center of the lens experience a larger phase delay

and they are delayed, whereas the light that's incident on the edges of the lens experiences a smaller phase delay so

they’re allowed to advance, and this actually causes

a focusing effect in the far-field. So now that we understand
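As a quick aside, the phase-delay relation just described, phi(x) = 2*pi*n/lambda * t(x), can be sketched numerically; the lens shape and all numbers below are illustrative, not from the talk.

```python
import numpy as np

# Phase delay through the lens, phi(x) = 2*pi*n/lambda * t(x), for an
# illustrative plano-convex lens (all numbers made up).
n = 1.5            # refractive index of the glass
lam = 0.5e-6       # wavelength, 500 nm
R = 1e-3           # radius of curvature of the curved face
x = np.linspace(-0.2e-3, 0.2e-3, 401)        # transverse coordinate

# Thickness profile of the spherical cap, zero at the lens edge:
t = np.sqrt(R**2 - x**2) - np.sqrt(R**2 - x.max()**2)

# Phase accumulated in the material: thickest at the center, so the center
# is delayed the most, which is exactly the focusing condition described.
phi = 2 * np.pi * n / lam * t
```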

light as a wave, we also know that from

a signal’s perspective, only phases between zero

and 2Pi mean anything. So we don’t have to actually think about this entire lens as

this entire optical element, we can cut away a bunch of the bulk. We’ll get something that looks

something like a Fresnel lens. This Fresnel lens performs basically the same functionality

as this conventional lens does, but it’s not very compatible with our conventional two-dimensional

lithography practice. So if we want to do

top down lithography, we can’t really make

these smooth curved surfaces. So what we can do is we can discretize our element

into multiple levels. So in this case, again, we have some spatially varying thickness, but now it is of a discrete nature. So each of these discrete levels, and there are four of them, can implement a discrete phase shift. So we've gone from a continuous curvature lens to a multi-step diffractive

optical lens. These work quite well, but it turns out that using our conventional top down
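The wrap-and-discretize procedure can be sketched like this, assuming a hyperbolic focusing phase and four levels; all values are illustrative.

```python
import numpy as np

# Wrap a continuous focusing phase modulo 2*pi, then quantize it to four
# discrete levels, mimicking the 4-level element described. Values illustrative.
lam, f = 0.633e-6, 100e-6            # wavelength and focal length
x = np.linspace(-15e-6, 15e-6, 301)

phi = -2 * np.pi / lam * (np.sqrt(x**2 + f**2) - f)  # ideal hyperbolic lens phase
phi_wrapped = np.mod(phi, 2 * np.pi)                 # only 0..2*pi is physical

levels = 4
step = 2 * np.pi / levels
phi_quantized = np.floor(phi_wrapped / step) * step  # 4 discrete phase levels
```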

lithography practices, the first one is almost impossible

to make and the second one is still very hard if we want

to do four phase steps. We need to do four different steps of lithography and

four steps of etching, and this gets complicated

pretty quickly because in general you want

more than four phase steps. If you want eight you

have to do eight. So one way to make these diffractive optics

compatible with these top down lithography

practices that Intel uses, is you can think about

a binary grating. So in this case, we're no

longer achieving phase by modulating the thickness

of our element, we’re now achieving phase by spatially modulating

the refractive index of our element. So in this case, you can see there's this n_effective that replaces the n, and now this n_effective is a function of space. You can crudely construe this: in areas where there's more material, n_effective is larger, so the light experiences a larger phase delay, and in areas with less material, n_effective is smaller and you experience less phase delay. Yeah.>>Potentially a stupid question. Why the gaps between them? The original continuous

design on the left, it does have notches where

the thickness is nearly zero, but on the rightmost side, you have lots of air gaps in between.>>Yeah. So you’re talking

about these air gaps?>>Yes.>>This is actually not

a very good picture, I guess, but in this case,

these air gaps, if you have some spatially varying grating that has some specific phase response that you'll get from it, you're essentially just modifying the duty cycle. The bigger the air gaps are,

the less material you have, and the smaller your effective

refractive index is going to be.>>He’s basically [inaudible]

around it [inaudible] right. So wherever you have a

cross or the thickness, it goes rounds to zero.>>Oh, I see. So that makes sense. Yeah.>>So then when we talk
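The duty-cycle intuition from this exchange can be sketched with a crude zeroth-order effective-medium estimate; this is illustrative only, and a real design uses full-wave simulation.

```python
import numpy as np

# Crude zeroth-order effective-medium estimate for a 1D subwavelength grating
# (TE polarization): a larger duty cycle f means more material and a larger
# effective index. Illustrative only; real designs use full-wave simulation.
n_hi, n_lo = 2.0, 1.0                 # e.g. silicon nitride and air

def n_eff_te(f):
    return np.sqrt(f * n_hi**2 + (1 - f) * n_lo**2)

duty = np.linspace(0, 1, 11)
n_eff = n_eff_te(duty)                # rises from n_lo to n_hi with duty cycle
```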

about diffractive optics, we also have to talk

about diffraction orders. In general, if we have

some diffraction grating with some periodicity capital Lambda, and some light with wavelength lambda is incident on this grating. As it's transmitted, it not only goes straight through but also gets diffracted into all these extra orders. This is true in general if your grating periodicity is greater than

your operating wavelengths, and if I’m making

something like a lens, I really want my light to go straight

through into the focal point and all this extra light

that’s getting wasted is just costing me efficiency. So this is something that

we can actually solve by reducing our grating periodicity to below the operating wavelength. In this way, we can actually show that all of these higher orders of diffraction are completely suppressed, and this condition is actually what brings us from diffractive optics into metasurface optics. So these are called subwavelength gratings or zeroth-order gratings, and if we wanted to modulate
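The order-counting argument can be checked with the grating equation at normal incidence, where order m propagates only if |m*lambda/Lambda| <= 1; the numbers below are illustrative.

```python
import numpy as np

# Grating equation at normal incidence: transmitted order m propagates only
# when |m * lam / period| <= 1. Numbers below are illustrative.
def propagating_orders(period, lam):
    m_max = int(np.floor(period / lam))
    return list(range(-m_max, m_max + 1))

lam = 0.633e-6                              # operating wavelength
coarse = propagating_orders(2.0e-6, lam)    # period > wavelength: many orders
subwav = propagating_orders(0.4e-6, lam)    # subwavelength period: only m = 0
```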

the phase using these gratings, if we want some uniform phase shift, we can send some plane wave at a uniform grating

and then we will get some uniform phase shift and

uniform plane wave exiting. If we want to have

some spatially varying phase shift, we need to spatially

modulate our grating. So in this case, the width, or the duty cycle, of my grating has a linear ramp, and that corresponds roughly in most cases to a linear phase shift, and you can think of a linear phase shift as something like a beam deflector. So at the top, we have denser material, so we have a higher effective index, and that means the light is delayed more. So dielectric metasurface optics is a body of research that goes back to the mid 90s. This is one of the first works that demonstrated a high-efficiency optic, and it was made in titanium oxide all the way back in 1998. More recently, there has been

work with silicon gratings. This one’s from HP labs. There’s also been work using silicon cylindrical posts

from Caltech, some rotated titanium oxide nanofins from Harvard. These gallium nitride

pillars that are rotated, and also they change the duty cycle. This is a collaboration between the University of Nanjing and Taiwan, and also some more recent work with these really strange looking

silicon pillars from Columbia. So all of these look a little different. They're all lenses except for the first one on the top-left, but there's something that is very consistent here, and it's that we have some periodic lattice. We have some regular lattice. In this case, the one on

the top middle is a hexagonal lattice, and below it is

a rectangular lattice. So there’s some lattice and

on these lattice points, we put some dielectric structure, and this dielectric structure

has some degrees of freedom. We can be changing

the radius of the pillar, we can be changing the rotation of a nanofin, or we can be changing

the geometry of these pillars. By changing the geometry

of these pillars, we can achieve different phase shifts and in general, also

amplitude shifts. This constitutes a large complex

system that has a large number of degrees of freedom that we are able to play with

and not only is it complex, but in general, these could also be coupled degrees of freedom. So it's a difficult design problem, and really, what we're interested in is

if we have this metasurface, how do I best implement a given

optical function on a metasurface? That question boils down

to how do I best take advantage of the large numbers of degrees of freedom

available to me? Just as a ballpark number, a relatively small metasurface is about a hundred microns by a hundred microns, and if you have a grating periodicity of around 500 nanometers, that is at least 4 times 10^4 degrees of freedom. So that's if you're just changing a single parameter in each unit cell. So if we want to solve

the design problem, I think there’s two general ways where we can think about

solving the design problem. One is forward design, which I would argue is more intuition-based. So in forward design, essentially, we need to calculate all the properties

of the scatterers that we’ll use. So

we’ll calculate it. We’ll just have some lattice,

we’ll put some scatterers on it, we’ll calculate all of

these individual scatterers and their properties, so their amplitude and their

phase transmission coefficients, and then we have some functionality

that we want to implement. Maybe it’s a lens and we know how

to implement that functionality. We know that a lens

is some hyperbolic or some quadratic phase profile that we can implement using these scatterers, and so that's something that

has been very successful. There’s another way of doing it, a complimentary way, it’s called inverse design which I’d argue

is more computationally based. In this case, we may have

some functionality that we want to achieve but we don’t really have a specific phase

profile distribution. We don’t really know

how to get there, but we can define

this functionality and we can encapsulate it in terms

of some figure of merit. After we've encapsulated it into a figure of merit, we can use some optimization-inspired approach to actually arrive at the distribution of scatterers that achieves our functionality. So as for what I'm going to cover, first I'll cover some work on single-element metasurface optics that I've done and

also some other work. Then I'll go over some metasurface optical systems, so this is like two metasurfaces or more, and then I'll go over some inverse design and optimization of metasurfaces, and lastly, I'll go over some of the future work and outlook

that I’m interested in. So my group uses silicon nitride for our metasurfaces

primarily, and that is motivated by four major reasons. One is its high refractive index, around two. By high we mean higher than glass. Two is its relatively wide band gap. It has a band gap of around 4-5 electron volts, which puts it in the UV band. On the left is a picture of a silicon nitride piece, or

a thin-film silicon nitride. So you can see that it’s

actually transparent. Yeah.>>Why is wide band gap

important or useful?>>So materials like silicon

are opaque to visible light, and if you’re interested in making a metasurface that’s

transparent to visible light, you need to have something

that has a wide band gap. Does that answer your question?>>Yeah.>>Okay. Third, it’s

potentially CMOS-compatible. So that means that you could

potentially use existing CMOS foundries or other CMOS-compatible foundries to produce these metasurfaces. In general, silicon

nitride is used as a hard mask but it’s also

possible to etch it, and it’s capable of making these photonic nanostructures that require strict

fabrication tolerances. So these two are pictures of

a nano beam photonic resonator and also photonic ring

resonator that were produced in our lab using

our silicon nitride. Fourth, and maybe most importantly, it was readily available in our local clean room

and there were already etching recipes developed for it so I didn’t have to do any of that work. So now that we have our material, we need to perform

a parameter search, and one way of doing this is using rigorous coupled-wave analysis, RCWA. This is a frequency-domain method, so you send one wavelength in at a time, and it assumes that you have some unit cell that is infinitely periodic in all space, and it's a Fourier-domain method as well. So what happens is you

split this structure into different layers along the direction of the light propagation. So for us, that is along the thickness of the pillar. In the in-plane directions, you actually expand the refractive index in a periodic Fourier series, and then you can solve this, and you get your solution in terms of some set of Fourier modes. So to begin, we start by

defining some unit cell. Here, I’ve defined a square

with some periodicity p, and I've placed a cylindrical scatterer with some thickness t and some diameter d in this unit cell. What we do is we run our simulations, and while we're running our simulations, we keep t fixed, because that's how we keep our compatibility with traditional top-down lithography. But we can vary d as much as we want. So for a given

periodicity, we vary d, and then after we run

all of our simulations, we can arrive at something like this, where now we can see that this pillar has

some real amplitude response and some real phase response. So in general, if you want a
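The diameter sweep just described has roughly this shape in code. Note that `toy_response` is a made-up fill-factor model standing in for a real RCWA solve; the structure of the loop, not the physics, is the point.

```python
import numpy as np

# Shape of the parameter sweep: fix period p and thickness t, sweep the pillar
# diameter d, and record an (amplitude, phase) pair per diameter. The function
# toy_response is a made-up fill-factor model standing in for a real RCWA solve.
p, t, lam = 0.44e-6, 0.6e-6, 0.633e-6        # period, thickness, wavelength (illustrative)

def toy_response(d):
    fill = np.pi * (d / 2)**2 / p**2                   # pillar area fraction
    n_eff = np.sqrt(fill * 2.0**2 + (1 - fill) * 1.0)  # crude effective index
    return 1.0, 2 * np.pi * n_eff * t / lam            # (amplitude, phase delay)

diameters = np.linspace(0.1e-6, 0.4e-6, 31)
amps, phases = zip(*(toy_response(d) for d in diameters))
```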

high-efficiency metasurface, you want a high, near-unity transmission amplitude, and that's what these parameters show in blue. In general, if you want a metasurface that can implement any arbitrary spatial phase pattern, you need to be able to cover zero to 2 Pi, and that's what red is. So now that we have our parameters, this set of parameters on the right is actually the set

of parameters that we used in all of the following silicon nitride metasurface demonstrations. We want to implement some phase profile. So in forward design, we generally know what kind of phase profile we want to implement beforehand. So we have some phase profile as a function of

some spatial coordinates. In this case, I chose to use circular coordinates

with r and theta. We have some wave vector

two Pi over lambda. So this is a focusing vortex beam generator. The first term is just that of a

lens with some focal length f, and then the second term is

some angular momentum term that determines

how many singularities there are in the phase profile. So if l equals one, there's one phase singularity. If l equals two, there's two, and these correspond to different quantized orbital angular momentum states that you can generate. So if I just calculate

this phase profile, I get something that looks like this for l equals one, and you can see that there

is a discontinuity that starts from the middle and

it goes towards the left, and this is a continuous

phase profile but in general, we know that only phase values between zero and two Pi are physical. So we can do a mod

operation and we get this nice little vortex picture. Again, the finest we can sample our phase profile is at
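The focusing-vortex phase, wrapped into [0, 2*pi) and sampled at the lattice period, can be sketched as follows; all values are illustrative.

```python
import numpy as np

# Focusing vortex phase: a hyperbolic lens term plus l*theta, wrapped into
# [0, 2*pi) and sampled at the lattice period. All values are illustrative.
lam, f, l = 0.633e-6, 250e-6, 1
period, N = 0.44e-6, 64                      # lattice period, 64 x 64 cells
coords = (np.arange(N) - N / 2 + 0.5) * period
X, Y = np.meshgrid(coords, coords)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

phi = -2 * np.pi / lam * (np.sqrt(r**2 + f**2) - f) + l * theta
phi_wrapped = np.mod(phi, 2 * np.pi)         # only 0..2*pi is physical
```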

the periodicity of our lattice. So now, we discretize our phase profile into the periodicity of the lattice

that we calculate. So now we have some discrete blocks that are the size of our periodicity.>>Which size can you get down to?>>So for these specific

parameters, our periodicity is 440 nanometers.>>What optical function

will you try to enable with this pretty good phase profile?>>This is a vortex beam generator. It creates a little donut profile that has applications in [inaudible] microscopy.>>Nice. Cool.>>Okay. So we have our phase profile

and now we can just essentially do a

one-to-one mapping from our phase to a diameter value. On the right is where we actually
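The one-to-one phase-to-diameter mapping is essentially a nearest-neighbor table lookup; the table values below are made-up stand-ins for the results of the RCWA sweep.

```python
import numpy as np

# One-to-one mapping from a target phase to a pillar diameter by
# nearest-neighbor lookup. The table below is a made-up stand-in for the
# amplitude/phase table that comes out of the RCWA sweep.
table_diameters = np.linspace(0.15e-6, 0.40e-6, 16)            # candidate pillars
table_phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # their phase shifts

def phase_to_diameter(target_phase):
    # Wrapped phase difference, so 0 and 2*pi are treated as the same phase.
    err = np.angle(np.exp(1j * (table_phases - target_phase)))
    return table_diameters[np.argmin(np.abs(err))]
```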

get when we do this mapping. We can simulate this in FDTD. So we can simulate scaled-down structures in FDTD, which is finite

difference time domain simulation. On the top left, you can see the little door knob profile

that this vortex beam generates. On the bottom left, you can

see a cross-section along the optical axis showing that

the door knob profile forms around 25 micron and on the far

right is an example of a structure that we simulate, where the yellow is a refractive index of around two and the blue is

a refractive index of one. So this shows the meshing that FDTD does when it simulates your structure. So it doesn't actually simulate perfect circles, but it simulates these rectangular blocks that make up circles.>>Just to verify. So that's the geometry

at the end of the day?>>Yes, that’s what we’re

going to fabricate.>>Those yellow dots

are the actual radii, the d values you were talking about. I see.>>So in general, we simulate very scaled-down versions of

these lenses.>>Each yellow block, it has many many smaller

features in it, right?>>Each yellow block

is just a cylinder.>>Just one cylinder?>>Yes. I can show you the picture right here. It’s

actually not very good. But each yellow blob is a single

cylinder, and it has some diameter, and essentially what we did here is consistent with that.>>So actually if you

look at the scale, the scale is quite small in

the previous diagrams, correct? So your actual lens is very small here.>>In this case, this lens is

about 30 microns in diameter. Yeah.>>I just have a quick question. So when you design the stuff, you said you keep the P fixed.>>Yeah.>>So you’re changing

the [inaudible]?>>Yes.>>Okay, and that’s

the only thing you changed.>>That is the only thing we

change for this demonstration. There has been other groups

that have done more with different unit cells

and that’s something we’re also working on in my group.>>So a larger yellow blob just

means a larger [inaudible]?>>Yes.>>Okay.>>Do you recall what the cell size was for the [inaudible] simulation?>>Lambda over 10n which makes

it around 25 nanometers.>>So it was determined

by Lambda, right?>>Yes.>>Okay. But that was still much

smaller than the value of d you get about because

the shape is going to get quantized and staircased when you're doing the simulations.>>The smallest radius

pillar that we fabricate was probably around 150

nanometers in diameter.>>Okay.>>So roughly seven unit

cells for instance.>>Okay.>>Yeah. So we were actually able to fabricate

these structures. Here’s a lens. We can see that this lens

is designed for 250 micron. You can see that there is some

finite focal shift with this lens, and that’s actually because

we designed this lens for 602 nanometers and

we tested it with, or I tested it with an LED that

has a very large bandwidth. So we actually ended up getting

something that looks like this. There's the focal spot of the lens. It looks nice and mostly circular, very unaberrated. We also made the

vortex beam generator, which looks like this. Then here’s an example of an intensity profile before

the vortex beam focuses, and then this is an example

of the vortex beam itself, where you can see that donut beam

profile actually being formed.>>That's still with the LED?>>This is still with the LED.>>How come you didn't use

a [inaudible] you could get it.>>So one thing we are

interested in, that we didn't really understand at that point when we were doing this research, is whether or not we need coherent light to make these structures work. So naively we would think that if

we’re playing with phase maybe we need to be playing

with coherent light in order for this to work, and that was something that

people hadn’t really tested. So we were like, we should test

with an LED and see what happens.>>I just realized. So when you’re doing this optimization of FDTD, are you doing it for

a single frequency, like monochromatic, for all of this?>>Yes.>>I see.>>So these elements work quite well. We have lenses that work. They're ultra-thin, they have

small focal lengths. They work well. The problem, which is what you guys are heading at, is that we have very large chromatic aberrations, and these are characteristic of any diffractive optic. So what ends up happening is that we design for a wavelength. We design a lens for a wavelength of 692 nanometers. It focuses pretty close to there for red light. But we observe as much as a 50 percent focal length shift over our entire visible spectrum. So for blue, yeah.>>How long does FDTD simulation
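That 50 percent figure is consistent with the standard ideal diffractive-lens scaling f(lambda) = f0 * lambda0 / lambda; the design values below are illustrative.

```python
# For an ideal diffractive lens, the focal length scales inversely with
# wavelength: f(lam) = f0 * lam0 / lam. Design values below are illustrative.
lam0, f0 = 633e-9, 250e-6       # design wavelength and focal length

def focal_length(lam):
    return f0 * lam0 / lam

shift_blue = focal_length(450e-9) / f0 - 1   # blue focuses farther away
shift_red = focal_length(700e-9) / f0 - 1    # red focuses closer
```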

and optimization process take for something of this size?>>So FDTD isn't optimizing anything. It's just simulating some structure that I give it. I would say that it takes

around 20-30 minutes.>>For each evaluation or for

the entire optimization multiples?>>There is no optimization

that’s happening. In this process we have a phase value and we pick a diameter that corresponds

to that face value. Then we can simulate the

structure using FDTD, just as a check that it would work. We don’t actually optimize an FDTD, and it’s impractical

to optimize an FDTD. But that’s something I’ll go over. So yes, large chromatic

aberrations characteristic of diffractive optics, not

good for imaging. So if you’re not familiar

with chromatic aberrations, the picture on the top is a sharp picture that has

very little chromatic aberration. The picture on the bottom is

chromatically aberrated and this is chromatic aberration

associated with a refractive lens. The chromatic aberration associated with a diffractive lens would actually be worse than the picture on the bottom. So that's a serious problem

that we want to correct. So that brings me to the next topic, which is Correcting

Chromatic Aberrations. So there’s been a lot of work

in correcting aberrations. This is a problem that

the [inaudible] community has been very interested in over

the past few years. So these works all came out in 2018. They do different things but what they’re really doing is they’re doing something called

dispersion engineering. So the problem of chromatic aberrations is a subtle problem. It doesn't result from the way that [inaudible] refractive optics exhibit chromatic aberration, which is some kind of anomalous dispersion in your refractive index. This is actually a product of the way that we wrap our phase. So when we do this mod operation, mod two Pi, for the wavelength of interest that we wrap, it wraps correctly. But for other wavelengths, we actually might wrap too early or we might

actually wrap too late. This actually causes the chromatic aberration, and there is some associated phase error with this wrapping operation that we perform. You can actually attempt to
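The wrap-too-early, wrap-too-late argument can be sketched by wrapping a lens phase at one wavelength and evaluating it at another; a thin element with no material dispersion is assumed, and all values are illustrative.

```python
import numpy as np

# Wrap a lens phase at the design wavelength lam0, then look at the phase the
# same wrapped element delivers at another wavelength lam. Thin element and no
# material dispersion are assumed; values are illustrative.
lam0, lam, f = 530e-9, 650e-9, 100e-6
r = np.linspace(0, 20e-6, 200)

def ideal_phase(wl):
    return -2 * np.pi / wl * (np.sqrt(r**2 + f**2) - f)

delivered = np.mod(ideal_phase(lam0), 2 * np.pi) * lam0 / lam  # rescaled wrapped profile
needed = np.mod(ideal_phase(lam), 2 * np.pi)
error = np.angle(np.exp(1j * (delivered - needed)))            # residual wrap error
```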

correct for this phase error, and this is what these

groups have done, and you can see on the

top-left that they get some nice focal lengths or nice focal spots that are all

in the same area for this lens. But you can also see that

it’s a very extended focus and these are very low

numerical aperture lenses. On the bottom, you can see that

these are some of the devices or the structures that

they’ve engineered to perform this

dispersion engineering. So there are limits to this technique of dispersion

engineering. Yeah.>>What’s the intuition for why some of these structures might

help to reduce [inaudible]?>>So you can think of

these as waveguides.>>Okay.>>For different waveguides, we have different effective indices that determine how these different wavelengths propagate. So there's a certain set of different modes that are allowed by these waveguides, and they all have

different refractive indices, and they will delay different

wavelengths by different amounts. So if you build a very large library of these waveguides, then you have a very large number of different delays, and you can think about just optimizing, or using your look-up table to pick out: if red light needs some specific delay to match

up with the green light, can I find a pillar that

gives me that delay?>>Got it, and then these sorts of rectangular structures have orientation. I mean, they probably have an increased parameter space in terms of orientation. Or is it, as you say, like a library, I mean, there's just these five that they've modified?>>So in this paper on the bottom, they have three generations. In the one on the bottom, they have

five primitive unit cells, and they can modulate the size of the hole of this unit cell or the thickness of it.>>Okay.>>So they can make the pillar on the far-left bigger or smaller, and they can make the hole that appears in the middle-left bigger or smaller, but not the pillar itself. I think you can see that from

what they’ve done there.>>Right. Got it.>>Yeah.>>So each one of these delays a different wavelength,

a different amount, has a different phase shift, and then how does that work? When you put them together, isn’t there still

some light that’s going through the wrong of these things, that’s getting the wrong delay? How does just that pillar provide the phase shift

for say, red light? None of the other pillars that

are tuned for green light, the red light doesn’t go

through those or is it some weird coupling

between these things?>>So first, nobody in forward design actually accounts for any kind of

coupling between the pillars, which is one of the problems

with this body of research. But in terms of how they know, what they've done is they basically calculate all of the modes for all of these pillars, for all the wavelengths they are interested in. So let's say pillar one gives

you phase shifts for red, phase shift for blue and

phase shift for green. You know that in order to

implement a perfect lens, your red phase shift needs

to be some certain function, your blue phase shift needs to

be some certain function and your green phase needs to

be some certain function. If you have a large enough

Library of pillars, you can choose the correct

pillars that will always produce those phases.

Does that make sense?>>Yeah. You choose a combination that gives you

the phase shift you want.>>Right.>>So this is basically a matter of how many degrees of freedom do I have to basically get

these different modes to get these different phase shifts. There’s a limitation

to it and it actually is limited by the height

of your pillars. So these different modes have different effective

refractive indices. But just having a different effective refractive index isn't enough. If you want to have a finite phase shift, you also need to have a thickness. What ends up happening is that this limitation is

defined by this equation. But essentially, what it says is that if you want to have some numerical aperture or some certain radius, you're limited by the total delta phase shift between these wavelengths that you can achieve. So delta is the compensation: if you're designing for green, you can compensate delta for red or blue. What this basically says is that if you want a high numerical aperture lens, in order to do so, you have to make very, very tall pillars, or else you have to make a very, very small radius lens. So for this group right here, what they've done is they have these 800 nanometer pillars and
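A back-of-the-envelope version of that height limit, my own simplification rather than the slide's exact equation: a pillar of height h whose guided modes span effective indices n_min to n_max can only differentiate two wavelengths by about 2*pi*h*(n_max - n_min)/lambda, so taller pillars buy more compensation range. Numbers are illustrative.

```python
import math

# Back-of-the-envelope height limit (my simplification, not the slide's exact
# equation): the maximum differential phase a pillar of height h can impart is
# about 2*pi*h*(n_max - n_min)/lam, so taller pillars buy more chromatic
# compensation range. Numbers are illustrative.
def max_phase_compensation(h, n_min, n_max, lam):
    return 2 * math.pi * h * (n_max - n_min) / lam

short = max_phase_compensation(0.6e-6, 1.2, 2.2, 0.633e-6)   # short pillars
tall = max_phase_compensation(1.4e-6, 1.2, 2.2, 0.633e-6)    # tall pillars
```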

these 1400 nanometer pillars. You can see that the effective phase shifts that they get don't actually correspond to much better performance. So this is something that my lab was interested in

solving in a different way, and we came at it from a computational imaging approach. So in this case, we wanted to find some phase profile. In this case, we have the phase profile of just an ordinary hyperbolic lens, and we add to that a cubic function. This cubic function

serves the function of creating an Airy beam, which is a diffraction-invariant beam. So as stated before, if alpha is equal to zero, we just get a lens, and at the focal point of the lens, if we design our lens for green, we have a nice tight focal spot, but there's significant blur in blue and red. But by making alpha some finite value, we create a roughly propagation-invariant beam. So we can see that at
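The lens-plus-cubic phase can be sketched as follows; the normalization of the cubic term by the aperture radius R is my assumption, and all values are illustrative.

```python
import numpy as np

# Lens-plus-cubic EDOF phase: an ordinary hyperbolic lens term plus
# alpha * ((x/R)**3 + (y/R)**3). The normalization by the aperture radius R is
# an assumption; all values are illustrative. With alpha = 0 it is a plain lens.
lam, f, R, alpha = 530e-9, 200e-6, 15e-6, 55 * np.pi
x = np.linspace(-15e-6, 15e-6, 101)
X, Y = np.meshgrid(x, x)

def edof_phase(a):
    lens = -2 * np.pi / lam * (np.sqrt(X**2 + Y**2 + f**2) - f)
    cubic = a * ((X / R)**3 + (Y / R)**3)
    return lens + cubic

plain = edof_phase(0.0)   # symmetric lens phase
edof = edof_phase(alpha)  # cubic term breaks the symmetry
```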

our focal point now, instead of having

a nice point for red, we have an L-shaped point spread function for all of these different colors, and they look really ugly. As you know, imaging is the convolution of your object with the point spread function. But even though they look really ugly, they all look fairly similar. So then that means that we can use

a single filter essentially to the department

de-convolution operation and maybe you can retrieve

the actual image back. So this is some very

basic wave for. Yeah.>>Because of this filter

>>Because of this filter function, the filter is acceptable or [inaudible] unacceptable, so it's [inaudible].>>Yes. But we don't actually

use that property of it; we just use the Wiener filter, because I'm not a computational imaging guy and this is our first work. But this builds on work from the early '90s by Edward Dowski and Thomas Cathey, who basically laid the foundation for this work. What we've done is essentially combine the two elements into one, and we have demonstrated that this can actually work with diffractive optical elements as well. So for some experimental results: this is just a singlet metasurface lens, and you can see that it's axially symmetric. This is what an EDOF lens looks like. It looks similar, but you can see on the top and on the left you get this L-shape that appears, and that's due to the cubic function that we're adding. So here is some of the imaging performance in color. In this case, we have some ground truth images on the far left: an RGB image, some rainbows, and a relatively natural scene. When imaged with the metasurface singlet, which is designed for the green wavelength, we get a very nice sharp green, but every other wavelength is blurred. On the middle right we have the raw EDOF image. There we can see that all of the images are relatively blurred by this L-shaped point spread function, and on the very far right is the filtered image. So what we've done is just apply a Wiener filter, and we can see that at least for the RGB image the R and G and B are better defined compared to the singlet, and the yellow is noticeably better. But there are still these L-shaped artifacts, which we attribute essentially to the residual chromatic aberrations of our system, and also to the fact that we just use a Wiener filter. So that comes with some noise amplification as well.
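For reference, a minimal Wiener deconvolution of the kind described can be written with FFTs (a generic sketch, not the group's actual pipeline; the SNR value is an assumption, and its regularization is exactly the source of the noise-amplification trade-off).

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Deconvolve `blurred` with `psf` using a Wiener filter.

    `snr` is an assumed signal-to-noise ratio; the 1/snr term
    regularizes the inverse filter where the PSF spectrum is weak.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur a point source with a small PSF, then sharpen it back.
img = np.zeros((64, 64)); img[32, 32] = 1.0
psf = np.zeros((64, 64)); psf[31:34, 31:34] = 1.0 / 9.0  # centered 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

A single filter works here because, as in the talk, the (cubic) PSF is nearly the same for every color channel.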

>>Also, does that represent something like the efficiency of the system, or something like that?>>Yes, the efficiency of this particular system was a little bit lower than that of the singlet lens, but it has a more uniform efficiency over the bandwidth, because the singlet is more efficient for green. Yes.>>Is there a contrast problem here? Some light not being focused by the lens, so there's reduced contrast on the right?>>So some of the light's not being focused?>>I'm just wondering, because if the lens is less efficient you could just take a longer exposure, right?>>Yes.>>Unless it has a smaller exposure. So why is there so little contrast on the very rightmost image compared to the original images? Is it because you have light leakage that's not being focused and goes into the background?>>So these metasurfaces are around 40 percent efficient: they focus around 40 percent of the transmitted light into this Airy beam spot, and that's a rough estimate of how efficient our lenses are.>>So the other 60 percent just gets spread across

the whole [inaudible].>>Yeah. It gets spread across. Some of it definitely gets rejected into the side bands, and that's the most notable source of loss that we've noticed. I think this is due to a sampling issue that we have: we don't sample finely enough, so we actually create different phase profiles just based on aliasing, and these aliasing effects inject light into the side bands sometimes. I'm also not the best experimentalist. This is some work that I collaborated on with Shane, and Shane is the first author; it's very interesting stuff. So these are all single

metasurface works, and I've covered some of the chromatic imaging. There's a lot of other stuff that's being done: holography, polarization optics, nonlinear optics, and some review articles. Yeah, there's some cool stuff being done in a lot of different fields. So next I will cover something towards metasurface optical systems. So this is two metasurfaces

together in tandem. So one thing that we demonstrated

was an Alvarez lens. So for those of you who aren't familiar with it, it's two phase plates that obey these cubic functions. When they are aligned, they provide no optical power, but for some finite displacement along the x dimension, which we call d, we get a tunable focal length. So the power is related to one over the focal length, and the power goes up linearly with the displacement.
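The displacement-to-power relation can be checked with a toy calculation (illustrative coefficient A, not the fabricated device's): two opposite cubic plates slid by ±d sum to a quadratic lens phase whose strength grows linearly with d.

```python
import numpy as np

# Toy Alvarez lens sketch; all parameters are illustrative assumptions.
A = 1.0e9             # cubic coefficient of the plates
wavelength = 1.55e-6
k = 2 * np.pi / wavelength

def plate(x, y):
    """One cubic phase plate: phi(x, y) = A * (x**3 / 3 + x * y**2)."""
    return A * (x**3 / 3.0 + x * y**2)

def combined_phase(x, y, d):
    """Two opposite plates laterally displaced by +/- d along x."""
    return plate(x + d, y) - plate(x - d, y)

def focal_length(d):
    """From 2*A*d*(x^2 + y^2) = k*r^2/(2f): power scales linearly with d."""
    return k / (4.0 * A * d)

x = np.linspace(-1e-4, 1e-4, 101)
X, Y = np.meshgrid(x, x)
d = 50e-6
phi = combined_phase(X, Y, d)
# Closed form: a pure lens term plus a constant piston phase.
closed = 2 * A * d * (X**2 + Y**2) + (2.0 / 3.0) * A * d**3
```

Doubling the displacement halves the focal length, which is the linear power tuning described above.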

We can simulate how these work using some wave-optics simulations. So for small displacements, we get a long focal length, and for large displacements we get a short focal length. Of course, we experimentally tested these. We fabricated them in our cleanroom; our fabrication has gotten better since then. We get around three millimeters of total tunable focal length across 100 microns of physical displacement, where each of the plates is displaced 50 microns in opposite directions, so it's 100 microns of total physical displacement. In this case, this focal length change corresponds to an optical power change of around 1,600 diopters. There's other tunable

systems that are more monolithic that have been

demonstrated more recently. In this case, there's a polymer lens that they use MEMS to stretch, and also a pretty interesting doublet, where they can change the distance between the two lenses to create a tunable focal length lens. Yeah. So in this case, they have a monolithic system, and they use these MEMS devices to actuate it. But what actually ends up happening is that they require very high voltages, on the order of 60 to 100 volts, to operate these, and also their tunable focal length ranges and their optical powers are a little bit lower than the Alvarez lens.>>How big are these lenses?>>The red scale bar is

about 100 microns, and the white scale bar is about 20 microns. So the bottom one is probably around 400 microns in diameter, and the top one is relatively large. So in addition to

tunable optical systems, there are also systems for angular incidence. The group at Caltech has been very prolific in making these systems. What they've shown are metasurface retroreflectors, which have applications in optical communications and such, and also really interesting angular aberration correction. Because these metasurface lenses are generally designed for straight-on illumination, they have a small field of view. So what they showed is that by fabricating another metasurface, they can correct angular aberrations up to around 20 degrees while keeping a nice focal spot. Something that our lab has also

demonstrated is large-area design. So we always claimed that these metasurfaces are compatible with conventional photolithography, and this is something that we actually implemented. In this case, what we've shown are these large-area Alvarez lenses, about a centimeter by a centimeter, and we can perform some varifocal imaging. Here on the bottom

is great work done by the group at Harvard, which has also done something similar; they also have centimeter-by-centimeter lenses. So these elements are compatible with traditional lithography, and you can actually make them quite big, quite easily, using photolithography. So now I'd like to go over some

of the work that I've done on the inverse design of these optical elements. So again, we have this design problem. We can use forward design, which is more intuition-based, or we can use inverse design, which is more computationally based and more of an optimization-inspired method. That's what I'll be talking about. So it's formulated as an optimization problem. Mathematically, given some figure of merit f(x), where x is a function of some set of parameters p, we want to minimize f(x) while constraining x to solve some linear system of equations, Ax = y. For a physical representation, f might be some intensity distribution that we want to achieve in the far field, x might be the electric field that the figure of merit is expressed as a function of, and p might be the radii of our cylinders, or it might be the dielectric permittivity of our system. So while we can't change x directly, we can change p, and by changing p, we change x. That's what Ax = y is: the physics of the system. In general, we use a gradient-based method to do this. So here's a layout of the optimization procedure. We start with some initial condition, we solve our forward problem, we calculate a figure of merit, we solve our inverse problem, and then we calculate our gradient, and we update continuously until our figure of merit reaches some exit condition.
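The loop above can be sketched generically (a toy problem with made-up physics, A(p) = diag(p), standing in for the real electromagnetic solve; all names are illustrative).

```python
import numpy as np

# Toy inverse-design loop: A(p) x = y with A(p) = diag(p), so x = y / p.
# Figure of merit f(x) = ||x - x_target||^2.
y = np.array([1.0, 2.0, 3.0])
x_target = np.array([2.0, 2.0, 2.0])

def forward(p):
    return y / p  # "solve" A(p) x = y

def fom(x):
    return float(np.sum((x - x_target) ** 2))

def gradient(p):
    x = forward(p)
    # Chain rule: df/dp_i = 2 (x_i - t_i) * dx_i/dp_i, with dx_i/dp_i = -y_i / p_i^2.
    return 2.0 * (x - x_target) * (-y / p**2)

p = np.ones(3)          # initial condition
for _ in range(3000):   # gradient updates until (approximate) convergence
    p -= 0.02 * gradient(p)
x_final = forward(p)
```

In the real problem the gradient is obtained with an adjoint solve rather than a closed form, but the forward-solve / figure-of-merit / gradient / update cycle is the same.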

So this is a field that has been growing in popularity in the nanophotonics community. One of the first demonstrations that caught a lot of attention was this wavelength demultiplexer. So what happens is you have two wavelengths incident from the top-left, and those wavelengths are split into these two other waveguides: one wavelength goes into the top waveguide, and one wavelength goes into the bottom waveguide. In addition, we also have some demonstrations of two-dimensional metasurface lenses. This was actually done in the radio and microwave band, and they've shown a high numerical aperture lens and also a more normal lens. These are two-dimensional lenses, so they focus light into a line; they're also known as cylindrical lenses. Here's work from a group showing two devices: one is a high-angle beam deflector, and one is a wavelength demultiplexer in free space. So what they've done is they have these unit cells that they tile, and they design a space of around two microns by two microns. So what you notice

about these inverse design demonstrations is that they all tend to be either limited to small volumes or to two-dimensional designs. So the first one is a two micron by two micron by a few hundred nanometer volume that they're designing, the bottom left is some two-dimensional design, and on the right they're designing some periodic unit cell that they tile over a larger area. So while these methods all result in different kinds of structures, they all rely on the same underlying method, and it's a finite difference method. We can solve these finite difference problems in the time domain, in which we start with some initial field and propagate it through using Maxwell's equations, using Faraday's law and Ampere's law; or we can solve them in the frequency domain, where now we're solving the vector wave equation, and we can form our equation, Ax = y, using this wave equation. But the issue is that because we're discretizing all of our design space into finite volumes, the memory scales with the volume of the system. So for large systems, this scales poorly, and it becomes very untenable very quickly.
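A back-of-the-envelope estimate (my own assumed numbers) shows why that scaling hurts: just storing the fields on a 3D grid at 20 cells per wavelength grows with the cube of the design size, before you even form the system matrix.

```python
# Rough finite-difference memory estimate: store 3 complex E-field components
# per grid cell (complex128 = 16 bytes). This counts field storage only and
# ignores the system matrix, which is far larger. Numbers are illustrative.

def field_memory_gb(side_um, wavelength_um=1.55, cells_per_wavelength=20):
    cells_per_side = int(side_um / wavelength_um * cells_per_wavelength)
    n_cells = cells_per_side ** 3
    return n_cells * 3 * 16 / 1e9  # gigabytes

small = field_memory_gb(2.0)    # ~2 um design region: well under a megabyte
large = field_memory_gb(150.0)  # ~150 um design region: hundreds of gigabytes
```

That contrast is roughly the gap between the small demonstrations above and the device scales discussed later in the talk.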

So how do we get to large-scale optimization? There are two main challenges that we have to overcome. One, we need a fast and memory-efficient simulation method: we can't use a finite difference method for a large volume without doing some tricks, because it just takes up too much memory and becomes too slow, and we need to run many iterations, so it also needs to be fast. Two, we also need to faithfully numerically simulate the system: we want to capture all of the electromagnetics of the system, to have the most robust optimization method possible and take advantage of all the physics that we can. So the idea that we had was to achieve both with an analytical scattering theory, and this is actually called the generalized

multi-sphere Mie method. So what we gain from this is an analytical theory: we have a scattering theory that is exact, and we can calculate the inter-particle couplings exactly. All of these scattering functions are easily computed mathematical functions, so we can calculate them very quickly instead of storing finite difference matrices, and our memory usage is also lower. What we lose is flexibility in designing arbitrary scatterers. So while the groups before were able to make these arbitrary scatterers, we're now restricted to spherical scatterers. So what we're doing is optimizing arrays of spheres, and we're changing their radii to achieve some optical function. We'll show that we can actually do some quite cool things with this. This method is also easily extended to scatterers of larger dimensions.>>So the spheres are three-dimensional?>>Yes. So this is

a three-dimensional method.>>Okay.>>We're optimizing, yeah.>>I guess you will probably talk about how you reduce that to two-dimensional fabrication.>>We use the Nanoscribe.>>Okay.>>Yeah. So the forward method is already implemented by a group from KIT, called CELES, and it solves this matrix system, a linear system of equations.
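Schematically, the multi-sphere problem reduces to one coupled linear solve for the scattering coefficients of all spheres (a dense toy version with scalar placeholder coefficients; the real code works with vector spherical-wave expansions and iterative solvers).

```python
import numpy as np

# Toy version of the multiple-scattering system:
# (I - T W) b = T a_inc, where a_inc holds incident-field coefficients,
# T is each sphere's (here scalar) Mie response, and W couples the spheres.
# All numbers are illustrative placeholders, not real Mie coefficients.
rng = np.random.default_rng(0)
n_spheres = 50

t = 0.1 * rng.random(n_spheres)                 # per-sphere "Mie coefficient"
W = 0.01 * rng.random((n_spheres, n_spheres))   # inter-sphere coupling
np.fill_diagonal(W, 0.0)                        # no self-coupling
a_inc = np.ones(n_spheres)                      # plane-wave-like excitation

A = np.eye(n_spheres) - np.diag(t) @ W
b = np.linalg.solve(A, t * a_inc)               # scattered coefficients
```

The key property exploited in the talk is that the coupling entries are analytic functions, computed on the fly rather than stored as finite-difference matrices.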

I was able to contribute a little bit to this by allowing the code to solve spheres of different radii; before, it was just spheres of the same radius. It's been proven able to solve systems with spheres numbering up to around 100,000, so it is relatively large-scale: we can simulate large three-dimensional distributions of spheres with this code, and that was a good place to start. So now we need

to find an application. One thing that was really interesting to me was depth sensing. I was really inspired by this paper, where they have a point spread function that varies as a function of defocus: these two lobes rotate in space as you defocus the system, and if you are able to accurately characterize that rotation, then images at different values of defocus are convolved with different point spread functions, and if you can deconvolve them, you can get a depth map, and they can do it actually

with very high resolution. So we wanted to do something similar, but instead of two continuously rotating lobes, I chose to have one focal spot that rotates around at different values of defocus, in a discrete helical pattern with eight focal planes. So in our first focal plane, I want to focus light at the yellow point, and I want to minimize the light intensity that goes to the blue point, because that blue point is the spot location of the next focal plane. At the next focal plane, which is at 120 microns, the focal point moves counterclockwise, and now I want to maximize intensity at the yellow point again and minimize intensity at these two blue points. At the next focal plane, I do the same thing, and at the end I get something that looks like this. This function can be roughly described by this figure of merit, where I set some non-zero intensity at the yellow points, so I want to maximize my intensity there, and I set the blue points equal to zero, so I want to minimize my intensity there.
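That figure of merit can be sketched schematically (stand-in intensities and hypothetical helper names, not the actual design code): reward intensity at each plane's target spot and penalize it at the other planes' spot locations.

```python
import numpy as np

# Schematic rotating-PSF figure of merit. In the real code the intensities
# would come from the Mie solver; here they are just a stand-in matrix.
n_planes = 8

def spot_locations(n_planes, radius=10e-6):
    """Target spots arranged on a discrete helix: one angle per focal plane."""
    angles = 2 * np.pi * np.arange(n_planes) / n_planes
    return [(radius * np.cos(a), radius * np.sin(a)) for a in angles]

def figure_of_merit(intensities):
    """intensities[i][j]: intensity in focal plane i at spot location j.
    Maximize the diagonal (yellow points), minimize the off-diagonal (blue)."""
    I = np.asarray(intensities, dtype=float)
    yellow = np.trace(I)
    blue = I.sum() - yellow
    return float(-yellow + blue)  # lower is better for a minimizer

# A perfectly rotating spot (identity-like matrix) beats a smeared one.
perfect = np.eye(n_planes)
smeared = np.full((n_planes, n_planes), 1.0 / n_planes)
```

Feeding a figure of merit of this shape to the gradient loop described earlier is what produces the sphere array that follows.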

So I put this into the method, and out comes an array of spheres. This array is about 150 microns by 150 microns, and on first look it looks pretty useless to me, a more or less random array of spheres. But we tested it in simulation, and we actually see that at these focal planes, we get this nice little spot that rotates around as we defocus the system. So now, when you actually need to make these structures, it turns out that you can actually make spheres if you use a Nanoscribe GT printer. So the Nanoscribe is a 3D printer that works using

two-photon lithography. So essentially, how it works is you have some polymer resist that is polymerized by light above a certain intensity. So what you do is you focus your pulsed laser beam, and in the areas where the pulsed laser beam is intense enough, you get polymerization and you get something that sticks. Everywhere else, you can just wash off after you've finished exposing your sample. So you can get these really cool, relatively high-resolution three-dimensional structures.>>[inaudible] material,

and this is the polymer.>>This is like a UV epoxy.>>Okay.>>It’s hard to define bandgap

for something like a polymer, but it is transparent

to visible light.>>Could you Modeled

the scene that we had before with

two or fewer forced years, on top of each other

or next to each other?>>Cylinders, it turns out that

that’s not a very good way because the spheres have near-field interactions that are not

present in cylinders.>>Okay.>>You can extend. So I will talk about how you can

extend this method into accepting different geometries, and it will with some modifications that can accept ellipsoids

are like cylinders. So we actually ended

up fabricating it. It looks similar in the top-down view to what we actually wanted, but on the right, you can see that the spheres aren't quite spherical; they look like layers of pancakes of different radii stacked on top of each other. But we might as well test it, so for reference, here are the simulation results again, and then here are the experimental results. We observe a higher noise floor; this device was designed for 1,550 nanometers. Part of the noise comes from dark counts from our camera, which is not very good. In addition, there are very noticeable fabrication defects that we saw in the previous slide. But ignoring all of that, we do see that we have this very high-intensity focal spot that's rotating around

in the same direction and in roughly the same locations, and we can characterize this. So we compare the simulated spot locations with the experimental spot locations, and we see that besides the first and the last focal spots, they actually match up quite well; most of them have an error of under one micron, though the first spot has an error that's a little bit above one micron. That could be accounted for by some finite amount of translational error; it's not fitting error, because I didn't fit the peaks. So now, I want to go into some of the future work and outlook, where I think that, yeah.>>Can I ask a question? So the structures you had also have these connective lines

between the spheres, right? If you go back a bit, to the example you have that's printed in the bottom right.>>If you wanted a big array of spheres, or a three-dimensional distribution of spheres, then they would need some support.>>Right.>>But in my case, I

didn’t need to do that.>>You didn’t.>>No, so in this case the spheres

are independent of each other. They're just on a glass substrate.>>Are they full spheres or half-spheres?>>They are full spheres. So the top-right picture is an angled view, and it doesn't really show how spherical the physical part is, but you can see it a little bit, maybe. But yeah.>>Maybe it's hard to tell, but

when looking not top-down, I mean, were you able to observe a greater amount of irregularity in the spheres? Because, I mean, when you're looking top-down, right, you can.>>Yeah.>>This might be, I'm assuming that the printer is printing in layers parallel to the substrate, right?>>Yeah.>>When you're looking from the other perspective, that's maybe a cross-section, the spheres will look more like half-submerged cylinders, maybe.>>I guess that might be true, but what the printer does is it doesn't actually

print any material. It selectively polymerizes, right? So you have a focal spot that's like a pixel that you scan over your resist.>>Yeah.>>So I don't see any reason why it would be less focused on the top and not on the bottom, because it's just a focal spot that's scanning. There might be some mechanical issue with the resist.>>So you can imagine it as focusing layer by layer, right?>>Yeah.>>Otherwise, you wouldn't even get visibility into certain spots and things like that. So there is a preferred direction here in which it's going. So I guess what you're saying is, top-down you can see nice disks still, but sideways, when you look at it, you might see some other shape issues.>>All right, that aspect is

mechanical issues actually.>>So there might be mechanical issues, but there have been groups that have actually been able to make these kinds of structures. So while there might be mechanical issues with larger three-dimensional arrays of spheres, I wouldn't actually expect it with what I'm doing, just because they have some really nice results, and this epoxy is basically SU-8, which has relatively well-studied mechanical properties and is capable of producing very high aspect ratio pillars.>>So you can make your little spheres so that they're balanced there without having them roll around?>>With some trial and error, yes. So when I make these spheres, there is a little bit of a flat surface, because I sink them in a little bit.>>Squashed on the bottom there.>>Yeah.>>I have had these samples

fly off when I try to rinse them off with something, and they'll just float up.>>That's okay. It's like 20 minutes of work, so it's not that bad. That's another benefit.>>[inaudible].>>Yeah.>>Oh, maybe. Yeah, so these are actually pretty big spheres, right? These are for visible light?>>This is... okay, I forgot to mention: this is for infrared light.>>Infrared light.>>So the resolution

of the Nanoscribe is roughly 200 by 700 nanometers; that's the smallest dimension you can do. So the smallest spheres that you can reliably make spherical, they quote as around one micron in diameter. So we decided to use infrared light because we couldn't actually resolve features small enough. The caveat is that this actually isn't a metasurface per se, because the periodicity is actually greater than the wavelength.>>This is like a periodic array.>>Yeah.>>But the simulation method

doesn't really change. It's more just that the fabrication forced us to make these bigger structures.>>[inaudible] term, diffractive.>>Maxwell doesn't care what you call it.>>[inaudible].>>Yeah.>>So, outlook. So we can extend this method to arbitrary shapes using something called

the T-matrix method. This is a method that was actually developed mostly for use by astronomers or astrophysicists, who want to study space dust and aerosol particles. This extension is something that I've been working on, and we've actually been able to implement it in the code. We don't have publishable results quite yet, but we have shown that we can increase the efficiency of a lens with a numerical aperture of around 0.83 from 20 percent to 26 percent using these ellipsoidal scatterers. So in this case, it wasn't an inverse design from scratch: we had some existing lens design that we made, and then we optimized it using this method. We were able to see a pretty significant efficiency increase, in my opinion.>>Now, in these types of designs, where does that 80 percent

of the light go? Is it just scattered uniformly across the image field?>>So one thing I didn't know about ellipsoids is that there's actually a lot of backscattering. A lot of ellipsoids will actually reflect light back at you, and that's something that I learned very recently when I was trying to figure out these intensities. So in this case, some of the light is being backscattered, and some of the light isn't being focused at the focal point. But I think it's a commonly accepted problem for high numerical aperture lenses to focus light with high efficiency: just based on the Fresnel equations, as your light is incident at larger angles, it starts being less efficient, based on the theta term.>>I think what Brian

is implying is that, if you are scattering light into where your image is, in a random or pseudorandom way, you're lowering your contrast and therefore your image quality.>>Yeah.>>Whereas if it goes backwards, maybe that's a different problem for the efficiency, but maybe it doesn't affect your image quality.>>In this case, I think there would definitely be light scattered into a random background, but I think that would be true of any refractive high-NA lens as well.>>[inaudible] Since

they're good at reflecting, have you looked at using these more as little mirrors than as little lenses?>>That's something that [inaudible], who's taking over this project, might be interested in. That was not something that I had thought of.>>Okay.>>So maybe in the future, I like the idea of designer optics,

because right now, the model for an optical element is you go to a website like Thorlabs or Edmund Optics, and you buy an optical element off the shelf. If you want some asphere, they can maybe make it for you, or if you want some special coating on your lens, they can do that for you. But if you wanted some really weird optical element, they probably wouldn't be able to make it for you, and I think that's largely due

to manufacturing reasons. So metasurfaces are already compatible with these top-down lithography practices, so maybe there's a new model where you can actually take some design that you have, give it to some company, and then they fabricate this optical element for you. Now you can have custom optical elements that are designed to work with your sensor, instead of using off-the-shelf components. Another really interesting

application is towards volume optics. So metasurfaces are these 2D arrays of scatterers, and when we think about optics, we always think about some 2D input plane incident on some 2D surface, which focuses to some 2D output plane, like a lens. But there's no reason that has to be the case: if we have some extended optical element, we can think about light entering from different directions, like through some cube. That's something that has been done with very low contrast glass. What they do is focus a laser beam in glass, and at high intensities this glass gets small refractive index changes, on the order of 0.001 or so. But even with this really small refractive index contrast and this really weak scattering, they can show that they can actually multiplex different functions, or different holograms, with respect to angle and also with respect to wavelength. That's something I'm really interested in working on, because I think inverse design really shines in this situation, where we have to design some three-dimensional volume. In that case, I don't think that it's really practical to make

some forward design.>>Do you think that this type of design would work well with liquid crystals, the helical types of geometric phase that dimension optics does?>>So the Berry phase optics?>>Yeah.>>In what context?>>So I mean, they have this nice technology where they basically have these helical LCs that give geometric phase optics, similar to the metamaterials we currently use, but they're actually more efficient, and it's a slightly more mature technology. Can you apply your types of design techniques to their materials?>>So these are helical scatterers?>>These are helical liquid crystals; I can show you later if you're not familiar. You should definitely check it out.>>Those are corkscrews?>>Yeah, they are like corkscrew liquid crystals. So they work with left- and right-handed circular polarization.>>Right.>>So you get

the geometric phase a different way than you do with these pillars.>>So in this case, it'd be polarization sensitive, right?>>Yes, very.>>Okay. I think it would be possible to arrange them. You would have to find some parameterization of this corkscrew structure: if the corkscrew structure has some height, some winding number, and some radius, and if you were able to express the surface in terms of these parameters, you could probably plug it into my code and then optimize those parameters. I think it's hard to parameterize a helical surface, though.>>That's not something that I

have very much experience with.>>They could basically selectively blast it away.>>So they're actually changing the rotation angle, right?>>I think they blast it away.>>They get their different phases by changing the orientation of the helix, the rotation of the helix.>>I think that the helix, if you have these devices, goes like this, so the axis of the helix is parallel to your substrate.>>Or perpendicular.>>Perpendicular.>>When we're dealing with, like, right-hand circular

polarized light, you can think of these helices as essentially being polarization converters, and essentially the Berry phase means your phase change is tied to the rotation of the polarization of your light. So it makes sense for the axis to be perpendicular to your substrate.>>I think even if the axis goes horizontal with the plane, if you look along the vertical, you'll still see another helix, if you think about it.>>Maybe.>>Yeah.>>I think they design these things with the axis perpendicular. Other people might have done what you're talking about [inaudible].>>I have another question. When you design, for example, like in the future, if you want to

design these 3D volume optics, you have to worry about the fact that light is being multiply scattered within the optics. Can you model that somehow?>>Yes, that's a really good point. If you have these scatterers, they have to be coupled together, and that's something that Mie theory actually does for us very well: it computes all these particle-particle couplings analytically.>>[inaudible] helix spherical.>>Assuming spherical, but

the formalism is the same if you have different scatterers with different geometries. The only thing you can run into is that, with Mie theory, if you have very closely packed particles, there's another extension that you need to add, because your particle has some circumscribing sphere, and that circumscribing sphere can intersect with the boundary of another particle. That causes problems due to singularities in the Bessel functions that you use to expand your fields. You can fix that in different ways, but that is something you

have to be careful about. The last thing is an idea that one of my advisors had: the co-design of optics and computational algorithms. So if we have some scene, or some feature that we're interested in, maybe we want to create an image of the scene, or maybe we want to make some decision. So we can think of the computational design of a metasurface, or of a stack of metasurfaces, that together perform some optical function. In this case, it can be trying to image the scene, in which case the metasurfaces apply some blur, or it can be performing some mathematical operation, some linear operation, on the scene, with some post-processing software afterwards. So this is like an imaging pipeline. But in practice, what has been done is that these two elements are separate components: the post-processing software and the optical element design are optimized independently. So one thing that we're interested in is essentially the co-optimization of these metasurfaces with the reconstruction algorithm, and that's something that metasurfaces really make possible, because now you really have a lot of control over the phase profiles and over

your scattering properties. Just as an overview: I went over some single-element metasurfaces, some metasurface optical systems, some work on the inverse design of metasurfaces, and some acknowledgments. I’m from the NOISE lab, the Nano Optoelectronic Integrated Systems Engineering lab; Arka’s on the top left, and then I want to thank some

of the collaborators. From left to right is

Taylor, Shane, Chris, James, and Max, and also some collaborators at

the Air Force Research Lab who helped us with

the inverse design project, as well as the facilities and some of the funding sources for our lab. Yeah.>>These kinds of metasurface lenses

seem to have problems with dispersion, efficiency,

angular dependence. What do you see as the future prospects of addressing those issues? Do you think in the next 10 or 20 years we’ll see

a flat metasurface lens that can rival conventional

refractive lens or are these devices going to

be more specialized, where you need some exotic control over the wavefront and these other issues are not important, because you always know that light is coming from a certain angle, you know the wavelength, and so forth?>>Yeah, so that’s a good point; it does seem like there are

a lot of problems right now. But one thing is that all of these problems have been solved

relatively independently. So we have achromatic operation that works for these small

numerical apertures. We have these angle-corrected lenses. I don’t see them in the near future replacing the lenses in

your smartphone, for example. But definitely in anything where you’re interested in a single wavelength, I think that these metasurfaces are very interesting. So if you have optical sensors that rely on a single wavelength, in autonomous cars or Internet of Things devices for example, that is a very good application for these, so definitely

customized sensors. But if you were able to integrate these metasurfaces into volumes, I think that is one straightforward pathway to actually solving all of these problems, including angle aberrations and chromatic aberrations. Maybe not the efficiency problems, but efficiency problems are lessened when you’re not forced to solve everything in a single surface. I think it’s important to note that conventional optics doesn’t solve these problems in a

single element either; if we want a really high-performance optical system, we have to have a lot of different optical elements in it, and it’s not a fundamentally different problem that the metasurfaces have.>>It seems different in the sense that it would be hard

to stack these things.>>Yeah.>>Like the efficiency would go down as you stack

more and more of these. It goes down a little bit as

you stack more and more lenses, but it doesn’t go down a lot, right?>>It doesn’t go down that much. So one problem with efficiency is when we’re creating high

numerical aperture lenses, we have these very high

phase gradients, and that causes some problems with efficiency. But like with conventional lenses, if you wanted a short focal length, we could have one lens of focal length f1 and stack another of focal length f2 on top of it, and that’s how we get our new focal length without actually having to implement all of these extra phase gradients. That’s another place where we can use these dispersion engineering techniques: if we have one corrected lens at f1 and another corrected lens at f2, we can make our new combined focal length, and that’s something that
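The stacking arithmetic can be checked in a few lines. This sketch assumes ideal paraxial thin lenses in contact, with illustrative wavelength and focal lengths; two quadratic lens phase profiles simply add, which is equivalent to the standard thin-lens composition 1/f = 1/f1 + 1/f2:

```python
# Two thin lenses in contact: 1/f = 1/f1 + 1/f2, because their
# paraxial phase profiles phi(r) = -pi * r^2 / (lambda * f) add.
# Values are illustrative, not from any particular design.
import numpy as np

wavelength = 633e-9                      # assumed design wavelength (m)
f1, f2 = 2e-3, 3e-3                      # two stacked focal lengths (m)
r = np.linspace(0, 0.5e-3, 256)          # radial coordinate across the lens

def lens_phase(r, f):
    return -np.pi * r**2 / (wavelength * f)

f_combined = 1.0 / (1.0 / f1 + 1.0 / f2)       # effective focal length
stacked = lens_phase(r, f1) + lens_phase(r, f2)  # phases add on stacking
direct = lens_phase(r, f_combined)               # single equivalent lens
```

Here stacking a 2 mm and a 3 mm lens yields a 1.2 mm effective focal length without any single surface having to carry the steeper phase gradient.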

would be interesting for them.>>So the LC Pancharatnam-Berry devices are over 99 percent efficient for one handedness of polarization. What efficiency do you think is possible with these types of metasurfaces, not just yours but everybody’s?>>In a single layer, efficiencies of around 70-80 percent have been

shown for metasurface lenses.>>At one wavelength?>>At one wavelength. But it’s important to note that even these helical elements are also diffractive lenses. If you have 100 percent efficiency at one polarization, then in applications where you have unpolarized light, you immediately lose the other half. In addition, these elements likely also display chromatic aberrations, because this geometric phase is also dispersive.>>Yes.>>So it’s not like this helical approach solves everything, but it is interesting

that they could do this.>>I guess what I’m hoping is that, is there a path to

making these devices 99 percent efficient because then we can stack them and incorporate them.>>Yes, I really do think so.>>Okay. So what does

it take to get there?>>I think these conventional

metasurface elements have all been fabricated in university clean rooms. So I mean, if you look at

the devices that we generally make.>>Just lack of

precision and features.>>I think that’s a big part of it. We experience a lot of

overetching and underetching. So I won’t bring up my Nanoscribe, but particularly in the case of, where is it? Like this lens right here, you can see there are obviously defects where dust particles have come in.>>[inaudible] clean room. Okay.>>Yeah. I’m not very

good at fabrication. Actually, this wasn’t work that I was experimentally involved with; I didn’t fabricate these. But a big issue with fabrication is that our clean rooms are not very reliable, and that’s the catch-all response that we always give: we overetch our pillars, and our structures aren’t actually what we want them to be. Another possibility is that, when we design these

using the forward design method, we don’t actually account for

the coupling between these pillars. So what we’ve done is we’ve

simulated these pillars in an infinitely periodic array

and then we take one out and plug it in somewhere, where it’s surrounded by things that may look similar or may be drastically dissimilar, like in this case.>>[inaudible] designing
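The forward design procedure being described can be sketched as a phase-lookup assembly. The phase-versus-diameter table below is fake illustrative data standing in for the periodic-array simulation, and the wavelength, focal length, and lattice pitch are assumed values:

```python
# Forward design sketch: pillars are characterized in an infinitely
# periodic array (here faked as a linear phase-vs-diameter table),
# then the lens is assembled by choosing, at each lattice site, the
# pillar whose transmitted phase is nearest the target lens phase.
import numpy as np

wavelength, focal = 633e-9, 100e-6       # assumed design values (m)
pitch = 0.4e-6                           # lattice period (m)

# Hypothetical periodic-simulation result: diameter -> transmitted phase
diameters = np.linspace(0.1e-6, 0.35e-6, 32)
table_phase = np.linspace(0, 2 * np.pi, 32, endpoint=False)  # stand-in data

# Target hyperbolic lens phase at each lattice site
r = pitch * np.arange(100)
target = (2 * np.pi / wavelength) * (focal - np.sqrt(r**2 + focal**2))
target = np.mod(target, 2 * np.pi)

# Nearest-phase pillar at each site (distance measured on the circle)
diff = np.abs(target[:, None] - table_phase[None, :])
diff = np.minimum(diff, 2 * np.pi - diff)
chosen = diameters[np.argmin(diff, axis=1)]
```

The weakness being discussed is exactly the assumption baked into the lookup: each pillar's phase was characterized among identical neighbors, while in the assembled lens its neighbors can be very different.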

it to say that the surrounding pillars should coordinate in a certain way, so that light doesn’t scatter around in unwanted directions?>>That’s something that inverse

design has actually shown it’s capable of helping with, because now we can actually account for all of these couplings. One of the things that these authors in particular, the ones on the bottom, complain about is that they are not

able to accurately characterize all of the couplings

between these pillars, and you can see that

they’re really dense, and it doesn’t really make sense

to consider them independent. So I think there’s also

a very big design factor that is important when you’re considering

how efficient these things are.>>So you’d have to do

a more rigorous inverse design, and you have to get

fabrication [inaudible], then maybe you could get there. So in simulation, have you gotten close to 100 percent?>>In simulation, we’ve gotten

up to 80 or 90 percent.>>But that was simulations, not this stuff, right, [inaudible]

scattering simulation.>>That was in the

full-wave FDTD simulation.>>Of what kind of structure?>>Some lens.>>Okay.>>It’s not a very

high numerical-aperture lens.>>But I’m asking, was it exotic

shapes like these pillars here.>>No it’s just dumb

circular cylinders.>>Okay, that’s what I was asking.>>I could take his question in a different direction. He’s asking more about generalized imaging optics for multiple wavelengths and so on, but metasurfaces have a few unique properties, right, where they might be well-suited to very specific applications. Things where you need something very thin, or very light, or where you could define an arbitrary optical wavefront, these sorts of things. So what applications do you see where there might be more low-hanging fruit for metasurfaces, where a traditional optical system might not fit well?>>So one thing that

maybe is not so far off: biological imaging is one thing that we’ve always

been very interested in. If we have these very thin optical elements, we could put one on some optical fiber.>>Okay.>>If we attach this, we can at least increase the collection efficiency of our fiber, and maybe increase its numerical aperture. That kind of thing is really

interesting for us. That’s pretty low-hanging fruit. And in these kinds of sensors, there are already certain sensors that use these kinds of diffractive optical elements. In that sense, maybe it’s not as plug-and-play, or it’s

not as useful currently, but that’s something

that could be of use. There’s been a lot of interest in roll-to-roll printing of

metasurfaces recently, and then you could maybe see

some polymer-based metasurfaces, where if you could actually get the resolution required for these

roll to rolled methods to work, you could print out

these rolls of metasurfaces, and maybe on a solar panel to get

uniformity on your solar cell. Or in the case of these

gallium-arsenide solar cells, maybe a metasurface that’s fairly efficient can focus light onto one of these small gallium-arsenide photocells. Metasurfaces have been sent

to space. That’s cool.>>Yeah.>>To do what?>>I’m not really sure,

it’s like [inaudible].>>Yeah.>>How long does

the inverse design process take? Is it parallelizable? And, since a lot of us here are computer scientists, what do you think the prospects are for coming up with better optimization algorithms and

improving efficiency that way?>>So that’s a really good question. This particular simulation took around one day on our workstation computer, which has one of the new AMD 12-core processors, and the matrix-vector multiplication is GPU-accelerated on an NVIDIA card, I think a Titan Xp. So this took around

a day and that was rigorously simulating all

of these spheres together. One way to make this better, something we’ve actually been thinking about, is that instead of simulating the entire structure together, which gives the accurate result, we could cut some corners by splitting it into different simulation regions. We can simulate small sections of the structure, and that’s easily parallelizable: you can simulate many sections at once. That also reduces the time it takes for your iterative solver to->>That would very much be

like FMM methods, right?>>That’s multipole?>>Yes.>>Yeah.>>If this is 2D, roughly, this [inaudible] stand beneath each other at all?>>No.>>No. So then that is like a quadtree: you’re subdividing this into four sections and so on.>>Yeah. So I think

you would actually want to have some overlapping, maybe some window that has

some kind of constant change.>>Yeah. Actually, FMM would do

full interactions between them. So you’re not isolating them at all.>>Okay.>>It’s just a way of organizing the compute so that you can expand in multipole series. So the guy in the upper-left corner will have a low-order angular interaction; you have a lower and lower order angular resolution.>>Right.>>In how you represent

the interaction between them.>>Do you ignore that interaction

or do you still->>The interaction is included, but that’s the central idea of FMM: you can actually do the N-squared interactions, but in N-log-N compute, while respecting, say, double precision, or you specify the precision and it will do it, respecting that precision. So I think you have your analytic solutions for

each sphere that you want to use, but you could import FMM-like ideas into this. Because you’re also doing single frequency, and that’s natively where FMM was designed.>>Right.>>So there’s definitely room to use those ideas for what you want.>>Right. I don’t

know much about FMM, but I do know that

the authors of the solver paper were thinking about it, and they were talking about it in their GitHub chat. I was like, okay. Cool.>>Right. Because I’m imagining

if it’s taking a day, then a lot of your work is going into this dense matrix of interactions

between all of these things.>>Right.>>Which is why you are

proposing tiling it and ignoring some interactions, but FMM will actually give you good results without ignoring those interactions.>>Is FMM easily parallelizable?>>People have been working

on that for a while.>>Okay.>>It boils down to a sparse

matrix vector multiplication, but that’s when you use

BEM as the foundation. You start with a boundary

element formulation and then apply FMM ideas.>>Right.>>But you would start

with analytic solutions and then apply FMM ideas, but I think they still apply.>>Okay.>>But it’s all based on multipole series, the Debye potentials, and all that business, which would still apply to you.>>This is all

non-convex optimization. So I’ve been using

L-BFGS just because it doesn’t give me oscillations

and that’s fine for me for now.>>What’s L-BFGS?>>It’s a quasi-Newton method. In gradient descent, you just go in the direction of your gradient; L-BFGS stores the history of the gradients and approximates your second derivative from it, so if you’re really close to the minimum, you go faster. It also has some, what is it called, trust-region methods, so that your figure of merit never increases if you’re trying to minimize; it has an adjustable step size that it handles automatically. There are other update methods
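A minimal illustration of L-BFGS as just described, using SciPy on the Rosenbrock function, a standard nonconvex test problem rather than a metasurface objective:

```python
# L-BFGS demo: a quasi-Newton method that builds a curvature
# (second-derivative) estimate from the history of gradients.
# Shown on the Rosenbrock function, whose global minimum is (1, 1).
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

def rosenbrock_grad(p):
    # Analytic gradient, analogous to the adjoint gradient in inverse design
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

# L-BFGS-B is SciPy's limited-memory BFGS implementation
result = minimize(rosenbrock, x0=[-1.2, 1.0], jac=rosenbrock_grad,
                  method="L-BFGS-B")
```

In the inverse-design setting, the figure of merit and its adjoint-computed gradient play the roles of `rosenbrock` and `rosenbrock_grad` here.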

that people have been thinking about. I’ve used Nesterov’s accelerated gradient for a little while. There are some people that have told us to use stochastic methods. One idea that’s

maybe a little far off, but I went to a D-Wave talk recently and they were

talking about optimization. I was like, hey, maybe I could

think about how D-Wave could help. That’s something I’ve been

interested in is using D-Wave.>>[inaudible] right?>>Yeah.>>Sorry, D-Wave is simulated annealing, right?>>Yeah.>>Yeah.>>Quantum annealing.>>It’s a bit like what they use in holography. That’s what I was going to

ask: how nonlinear is this? So if you take one ball out of this system, can you easily subtract its contribution to your reconstruction, to your field, or once you take a ball out, do you have to calculate the whole thing again?>>So if I take a single sphere

out of this matrix, the system of equations that

I’m solving is changed, and I need to calculate again, yes.>>Well, there’s global optimization, that’s the philosophy anyway.>>Because with holography everything is linear, so you can say, I’m flipping one pixel.>>Yeah.>>So I’m just calculating

the contribution of that pixel.>>That’s what we were saying earlier: in the forward modeling, that’s the built-in assumption. The idea of this approach is that the coupling is not ignored and all that.>>This is still a linear system. It’s just highly coupled, and

maybe if there was a way that you could resolve

the coupling somehow. That’s interesting. I never thought about it that way.>>Is it possible that you

can get something out of, instead of assuming these are all on a transparent substrate?>>These are simulated in free space.>>Oh okay.>>Yeah.>>So they’re simulated in free space. Well, I was wondering

is it possible that you could use hemispheres, metallic backplane and

use it for reflection?>>Right. So isn’t that the

same as what I’m doing?>>Well, it could be, and it

could be easier to fabricate. It could be more efficient.>>Oh, okay. I see what

you’re saying. So I can’t actually simulate flat boundaries. This code doesn’t

actually support that.>>Or you could just say

it’s a perfect mirror.>>Yeah. I could simulate something like a reflection of this pillar set. Yeah, that’s totally doable.>>I had a more general

question building on that. All of this so far is for refractive materials; have people looked at reflective metasurfaces? Are there uses for that, like making a nice mirror with special features?>>Yes.>>That is something cool for you.>>The original chromatic

aberration paper that used dispersion engineering actually used a reflective mirror, because that’s a really easy way to double your phase: light travels in one way and comes right back out. That’s one way that you can double the phase compensation and not need a very large thickness. There have been other interesting ideas. One of the cool things

that’s come out recently is a spin-preserving mirror. Normally, when right-hand circularly polarized light reflects off a mirror, it becomes left-handed, but they’ve made a mirror such that right-hand circularly polarized light reflects back right-handed, and it doesn’t reflect left or something. I’m not sure about

what it does to left.>>Interesting. We’ve done that

with the [inaudible] LC materials.>>Okay.>>Or something similar; it wasn’t exactly that.>>So where is the emphasis

in the field now? Is it on better

computational methods? Is it on better materials? Where do people see the biggest

possible improvement coming from?>>So I think more recently, there’s been a lot of work towards system-level metasurfaces, like these retroreflectors and these angle-compensating things; right now that’s a harder problem, as are these tunable systems. There is a significant push towards inverse design, and that’s part of the push that I’m part of. I don’t think that many people are exploring computational-imaging-paired metasurfaces, because there hasn’t been very much research done with these. But that’s very much one of the emphases: one of the major emphases of our lab is that we really want to pair these computational imaging techniques with metasurfaces and do this co-optimization. In addition, there are a lot of interesting ideas

in nonlinear optics. With these metasurfaces, if you have a nonlinear material, you can achieve some phase-matching condition, and people have shown that you can get high nonlinear enhancements with these metasurfaces. There’s some really cool work

doing engineered disorder, but I think the field is moving more towards system-level integration and vertical stacking, and I think it’s actually moving towards volume optics, though maybe slowly, in addition to these inverse design methods.>>You’re graduating, so for example, I’m very interested in

collaboration, who should I talk to?>>Arka Majumdar.>>Okay.>>Or the other two people that

are very much at the top of the list are Andrei Faraon at Caltech and Federico Capasso at Harvard. If you’re interested in

the metallic metasurfaces, which I didn’t talk about at all, Vladimir Shalaev at Purdue

is really interesting. Then there’s also Boltasseva,

Alexandra Boltasseva there. Naomi Halas at Rice does

some interesting plasmonic stuff. There’s this group in China that recently made this; there are a number of universities on this list, mostly in Taiwan and Nanjing. And there’s also Nanfang Yu at Columbia doing this kind of stuff, though I think he mostly works on infrared optics right now and not as much in the visible.>>We’re going to try

to get her here later in the summer, so we’ll keep you posted on that. Okay. Well, that’s it if there are no more questions. Thank you very much; that was very detailed and helpful for us to understand a lot of these issues.>>Yeah. Thank you.>>Yeah.
