Hello, I am a PhD Candidate at the Human Systems Laboratory in the MIT Department of Aeronautics and Astronautics. My research tackles climate change with machine learning, little by little, together with Prof. Dava Newman, Cait Crawford, and Chris Hill.
I am concerned that running a high-resolution (1 km) climate model can take multiple weeks on the world's largest supercomputers, consuming as much electricity as a coal power plant generates in one hour. To overcome this computational cost, we are reshaping machine learning models into fast copies, or 'surrogates', of climate models. The core difficulty is ensuring trust and physical consistency in the surrogates, so that policy- and decision-makers can rely on them.
My research has won grants from NSF, Climatechange.ai, ESA, Portugal Space, NASA, IBM, Microsoft, NVIDIA, MIT Pkg, and MIT Legatum. I advised two teams of senior researchers at the NASA/SETI Frontier Development Lab, co-founded the ForestBench Consortium, interned with IBM Future of Climate and BRT (John Deere), earned an M.Sc. from MIT in safe and robust deep reinforcement learning, and a B.Sc. from TUM in Engineering Science. I also windsurf poorly, jam, joke, and love meeting new people - you are no exception - please don't hesitate to reach out. We have two Summer UROP opportunities in visualizing floods or super-resolution of climate models.
Email: lutjens [at] mit [dot] edu
[Last updated Feb. 9th 2022]
Because climate models take so long to run, climate researchers cannot quickly explore the local impacts of uncertain or what-if climate policy scenarios. Researchers in fluids and climate have long been accelerating the underlying partial differential equation (PDE) solvers. But most approaches require too much domain knowledge to adapt the solver to each of the hundreds of equations in a climate model. We are proposing a seemingly radical approach: train machine learning (ML) models to directly map a climate scenario to its local impact. While pure-ML models would indeed be too radical, we are developing algorithms that exploit knowledge from PDE solvers and enforce physical constraints. On our journey, we have already developed a novel learning-based multiscale PDE solver, a fast uncertainty propagation algorithm, and a fast surrogate for coastal flooding (see Digital Twin Earth: Coasts).
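The surrogate idea can be sketched in miniature: run an expensive physics solver a handful of times, then fit a cheap model that maps the scenario parameter directly to the impact metric. The sketch below is purely illustrative and not our actual method - the 1D heat equation stands in for a climate model, the diffusivity for a policy scenario, and a polynomial for the learned surrogate; all of these are assumptions for demonstration.

```python
import numpy as np

def heat_solver(kappa, n=100, steps=2000, dt=1e-5):
    """'Expensive' reference model: explicit finite-difference solve
    of the 1D heat equation u_t = kappa * u_xx, returning a scalar
    impact metric (the peak temperature after a fixed time)."""
    x = np.linspace(0.0, 1.0, n)
    u = np.exp(-100.0 * (x - 0.5) ** 2)    # initial Gaussian hot spot
    dx = x[1] - x[0]
    for _ in range(steps):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u.max()

# Sample the expensive model at a few parameter values ...
kappas = np.linspace(0.1, 1.0, 8)
peaks = np.array([heat_solver(k) for k in kappas])

# ... and fit a cheap surrogate mapping parameter -> impact.
surrogate = np.polynomial.Polynomial.fit(kappas, peaks, deg=3)

# The surrogate now answers what-if queries without re-solving the PDE.
print(surrogate(0.55), heat_solver(0.55))
```

The same logic scales up: in practice the "few samples" are full climate-model runs and the polynomial is replaced by a deep network with physical constraints built in.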
Simulating coastal floods at high resolution is computationally too expensive for real-time inference, uncertainty quantification, or low-income countries. We are leveraging neural operators, a new physics-infused deep learning technique, to learn a coastal flood model that is orders of magnitude faster.
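The building block of a Fourier neural operator is a spectral convolution: transform the input field to frequency space, apply learned complex weights to the lowest modes, and transform back. The toy sketch below shows that one step with random, untrained weights standing in for parameters that would be learned from flood simulations; it is an illustration of the layer type, not our flood model.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights):
    """One spectral-convolution layer: FFT the field, reweight the
    lowest Fourier modes with learned complex coefficients, inverse
    FFT back to the spatial grid."""
    u_hat = np.fft.rfft(u)                  # to frequency space
    k = len(weights)
    out_hat = np.zeros_like(u_hat)
    out_hat[:k] = u_hat[:k] * weights       # act only on low modes
    return np.fft.irfft(out_hat, n=len(u))  # back to the grid

# A random input field on a 256-point grid, and 16 random mode weights.
u = rng.standard_normal(256)
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)
v = spectral_conv_1d(u, weights)
print(v.shape)  # (256,)
```

Because the weights live in frequency space rather than on a fixed pixel grid, the same learned operator can be evaluated on coarser or finer grids, which is one reason neural operators suit PDE surrogates.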
As climate change increases the intensity of natural disasters, society needs better tools for climate risk communication. We are creating a "Google Earth of the future": a global visualization of how climate change will shape our landscape. On our path, we proposed the first deep learning pipeline to ensure physical consistency in synthetic satellite imagery, and we are publishing a dataset of 25k labelled satellite images of coastal floods and melting Arctic sea ice. Explore our results at trillium.tech/eie.
Small-holder forest conservation projects are often excluded from global carbon markets because monitoring their forest carbon is too expensive or inaccurate. We create deep learning-based forest inventories from drone and smartphone imagery, collected by locals, to cheaply and accurately estimate the tree-sequestered carbon. Transparent carbon monitoring allows local landowners to participate in global carbon markets and subsidize their livelihood by maintaining and rebuilding rainforest. Learn more at forestbench.org.
Deep learning-based models have recently outperformed state-of-the-art seasonal forecasting models, for example in predicting the El Niño-Southern Oscillation. However, current deep learning models are based on convolutional neural networks, which are difficult to interpret and can fail to model large-scale atmospheric patterns. We propose the first application of graph neural networks to long-range forecasting, because they can better capture spatially distant dependencies. We show that our model, Graphiño, outperforms state-of-the-art machine learning-based models for forecasts up to six months ahead and is more interpretable.
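Why can a graph capture distant dependencies better than a CNN? A convolution only mixes neighbouring grid cells, so a teleconnection must propagate through many layers; on a graph, a single edge can link two far-apart cells directly. The minimal, hypothetical sketch below shows one graph-convolution step on a toy five-node graph with one long-range edge - it illustrates the mechanism only and is not the Graphiño architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, features, weight):
    """One graph-convolution step: each node averages the features of
    its neighbours (and itself), then applies a learned linear map
    and a ReLU nonlinearity."""
    adj_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)
    msg = (adj_hat / deg) @ features            # mean aggregation
    return np.maximum(msg @ weight, 0.0)        # linear map + ReLU

# Toy graph: 5 "grid cells" in a chain, plus one long-range edge (0, 4)
# that lets distant cells exchange information in a single step.
adj = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]:
    adj[i, j] = adj[j, i] = 1.0
features = rng.standard_normal((5, 3))          # e.g. per-cell anomalies
weight = rng.standard_normal((3, 8))            # untrained toy weights
out = gcn_layer(adj, features, weight)
print(out.shape)  # (5, 8)
```

In a forecasting model, the nodes would be ocean or atmosphere grid cells and the learned edges could encode teleconnections directly, which is also what makes the resulting model easier to inspect.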
Click here for the paper.
Deep neural networks, used in commercially available driver-assistance systems, can fail on hard-to-detect adversarial inputs, for example, white stickers on the road. This work invented an add-on, real-time defense algorithm that certifies the robustness of deep reinforcement learning algorithms to such adversaries. Patent pending. MIT News: Algorithm helps artificial intelligence systems dodge "adversarial" inputs.
Deep neural networks can fail overconfidently on novel observations, for example, an uncooperative pedestrian in front of a personal vehicle. This work pioneers a reinforcement learning framework that detects novel observations and cautiously avoids them by reasoning about the neural network's predictive confidence.
Instructing robots through expert programming is expensive. This work creates a cheap instruction method by allowing non-technical users to control a robot in real time through an affordable motion-capture suit.