Virtual seminar series at the interface of energy, optimization and machine learning research
April 29, 2022
A large and rising share of carbon dioxide emissions is attributable to the electricity sector. The electrification of heat and transportation will have broad consequences for our energy systems and for the electricity sector. Improperly managed, massive electrification could destabilize power grids, cause blackouts and dramatically increase the cost of electricity. Carefully integrated, however, heat and transportation could represent a sorely needed flexibility resource as we turn towards generation sources that rely heavily on the wind and sun.
The research presented here focuses on building advanced computational tools for sustainable energy systems. These tools are developed and deployed in research-grade, end-to-end, prototype software systems for real-world energy and carbon management applications. Two currently active areas of interest that will be discussed in this talk are (1) building tools to track emissions in the US electricity system and (2) experimenting with building HVAC systems on the Stanford campus in the context of the COOLER Research Program. COOLER’s goal is to make large, modern buildings more energy-efficient, flexible, low carbon, and resilient using data, optimization, and control.
Jacques de Chalendar is a Visiting Scholar and an Adjunct Professor in the Energy Resources Engineering department at Stanford University. He was previously a doctoral candidate in the same department, advised by Profs. Sally Benson and Peter Glynn. He is also an Ingénieur Polytechnicien from the French Ecole Polytechnique (X2011).
His research focuses on building state-of-the-art computational tools for energy and carbon management problems. Two currently active projects include (1) building tools to track emissions in the US power system and (2) experimenting with building HVAC systems on the Stanford campus in the context of the COOLER Research Program. COOLER’s goal is to make large, modern buildings more energy-efficient, low carbon and resilient using data, optimization, and control.
May 13, 2022
With the push towards more sustainable electric power systems, renewable generation resources, which are usually connected via power electronics, increasingly replace power plants based on synchronous machines. On one hand, this changes how the generation resources interact with the grid and affects system dynamics; on the other, the power electronics allow for, or even require, new approaches to power system planning and operation. In this talk, we will provide examples of where optimization can be leveraged at different time scales to integrate increased levels of renewable generation, taking into account their capabilities while also addressing the newly arising challenges.
Gabriela Hug (Senior Member, IEEE) was born in Baden, Switzerland. She received the M.Sc. degree in electrical engineering and the Ph.D. degree from the Swiss Federal Institute of Technology (ETH), Zürich, Switzerland, in 2004 and 2008, respectively. After the Ph.D. degree, she worked with the Special Studies Group of Hydro One, Toronto, ON, Canada, and from 2009 to 2015, she was an Assistant Professor with Carnegie Mellon University, Pittsburgh, PA, USA. She is currently a Professor with the Power Systems Laboratory, ETH Zürich. Her research is dedicated to control and optimization of electric power systems.
April 16, 2021 Video
Distribution grids are currently challenged by the rapid integration of distributed energy resources (DER). Scheduling DERs via an optimal power flow problem (OPF) in real time and at scale under uncertainty is a formidable task. Prompted by the success of deep neural networks (DNNs) in other fields, this talk presents two learning-based schemes for near-optimal DER operation. The first solution engages a DNN to predict the solutions of an OPF given the anticipated demands and renewable generation. Unlike the generic learning setup, the training dataset here (namely past OPF solutions) features rich yet largely unexploited structure. To leverage prior information, we train a DNN to match not only the OPF solutions, but also their partial derivatives with respect to the input parameters of the OPF. Sensitivity analyses for non-convex and convexified renditions of the OPF show how such derivatives can be readily computed from the OPF solutions. The proposed sensitivity-informed DNN improves sample efficiency at a modest increase in computation. Nonetheless, this two-stage OPF-then-learn approach may not be suitable for DER operation when the OPF input parameters are changing rapidly. To deal with such setups, we put forth an alternative OPF-and-learn scheme. Here the DNN is not trained to mimic the OPF minimizers but is rather organically integrated into a stochastic OPF formulation. A key advantage of this second DNN is that it can be driven by partial OPF inputs or proxies, namely the measurements the utility can collect from the grid in real time.
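The sensitivity-informed training idea can be sketched in miniature: penalize mismatch in both the predicted solution and its derivative with respect to the input parameter. In the sketch below a linear model stands in for the DNN and the "OPF map" is a toy function; all names, targets, and coefficients are illustrative, not the talk's implementation.

```python
# Toy "OPF map" y(p) = 2p + 1, with sensitivity dy/dp = 2 available from
# the OPF solution. A linear model w*p + b stands in for the DNN; its
# sensitivity with respect to the input is simply w, so the training loss
# is (1/n) * sum (w*p + b - y)^2  +  lam * (w - dy/dp)^2.
data = [(0.1 * i, 2.0 * (0.1 * i) + 1.0) for i in range(20)]
dy_dp = 2.0                       # sensitivity target (illustrative)
w, b, lam, lr, n = 0.0, 0.0, 1.0, 0.05, len(data)

for _ in range(3000):
    gw = sum(2 * (w * p + b - y) * p for p, y in data) / n + 2 * lam * (w - dy_dp)
    gb = sum(2 * (w * p + b - y) for p, y in data) / n
    w, b = w - lr * gw, b - lr * gb
```

The sensitivity term steers `w` towards the known derivative, so fewer solution samples are needed to pin down the model.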
Vassilis Kekatos is an Assistant Professor with the power systems group in the Bradley Dept. of ECE at Virginia Tech. He obtained his Ph.D. from the Univ. of Patras, Greece in 2007. He is a recipient of the US National Science Foundation CAREER Award in 2018 and the Marie Curie Fellowship during 2009-2012. He has been a postdoctoral research associate with the ECE Dept. at the Univ. of Minnesota, and a visiting researcher with the Univ. of Texas at Austin and the Ohio State Univ. His current research focus is on optimization and learning for future energy systems. He is currently serving on the editorial board of the IEEE Trans. on Smart Grid.
April 30, 2021 Video
Optimization of electric power grids is a challenging, large-scale, non-convex problem. In order to optimize assets across these networks on fast operational timescales, the problem is typically simplified using linear models or other heuristics, resulting in increased cost of operation and potentially decreased reliability. Much work has been performed on improving these models through convexifications and other approximations, but here we take an alternate approach. We use the abundance of data from grid operators or generated offline to train machine learning models that can calculate optimal grid setpoints without even solving an optimization problem. Using a neural network combined with a post-processing procedure, feasible, near-optimal solutions to the AC Optimal Power Flow problem can be obtained in milliseconds.
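The "predict, then post-process" pattern can be illustrated with a deliberately simple repair step: clip learned setpoints to generator limits, then push any remaining imbalance onto units with headroom. This is an illustrative sketch with made-up data, not the post-processing procedure from the talk.

```python
def restore_feasibility(pred, pmin, pmax, demand):
    # Clip each predicted setpoint to its limits, then absorb any
    # remaining power imbalance with units that still have headroom.
    p = [min(max(g, lo), hi) for g, lo, hi in zip(pred, pmin, pmax)]
    imbalance = demand - sum(p)
    for i in range(len(p)):
        # delta is limited by how far unit i can move in either direction
        delta = max(min(imbalance, pmax[i] - p[i]), pmin[i] - p[i])
        p[i] += delta
        imbalance -= delta
    return p

# ML model predicted [1.2, 0.1, 0.5]; limits are [0, 1] each; demand is 2.0.
setpoints = restore_feasibility([1.2, 0.1, 0.5], [0, 0, 0], [1, 1, 1], 2.0)
```

Real feasibility restoration must also respect network (power flow) constraints; the sketch only covers box limits and balance.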
Dr. Kyri Baker received her B.S., M.S., and Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University in 2009, 2010, and 2014, respectively. From 2015 to 2017, she worked at the National Renewable Energy Laboratory. Since Fall 2017, she has been an Assistant Professor at the University of Colorado Boulder, and is a Fellow of the Renewable and Sustainable Energy Institute (RASEI). She received the NSF CAREER award in 2021. She develops computationally efficient optimization and learning algorithms for energy systems ranging from building-level assets to transmission grids.
May 14, 2021 Video
Integration of variable renewable and distributed energy resources in the grid makes demand and supply conditions uncertain. In this talk, we explore customized algorithms to tackle risk-sensitive electricity market clearing problems, where power delivery risk is modeled via the conditional value at risk (CVaR) measure. The market clearing formulations allow a system operator to effectively explore the cost-reliability tradeoff. We discuss algorithmic architectures and their convergence properties to solve these risk-sensitive optimization problems at scale. The first half of this talk will focus on a CVaR-sensitive optimization problem that can be cast as a large linear program. For this problem, we propose and analyze an algorithm that shares parallels and differences with Benders decomposition. The second half of this talk will focus on another CVaR-sensitive problem for which we propose and analyze the sample complexity of a stochastic primal-dual algorithm.
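As a concrete reference point, CVaR admits the well-known Rockafellar-Uryasev sample-based form, sketched below on invented loss data (this is the standard textbook computation, not the talk's solver):

```python
def cvar(samples, alpha):
    # Rockafellar-Uryasev form: CVaR_a(Z) = min_t { t + E[(Z - t)_+] / (1 - a) }.
    # The objective is piecewise linear in t with kinks at the samples, so on
    # a finite sample the minimum is attained at a sample point (the VaR).
    n = len(samples)
    return min(t + sum(max(z - t, 0.0) for z in samples) / (n * (1 - alpha))
               for t in sorted(samples))

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
tail_risk = cvar(losses, 0.8)   # mean of the worst 20% of outcomes -> 9.5
```

Because this form is a minimization over an auxiliary scalar, CVaR constraints embed directly into linear programs, which is what makes the large-LP casting in the first half of the talk possible.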
Subhonmesh Bose is an Assistant Professor in the Department of Electrical and Computer Engineering at UIUC. His research focuses on facilitating the integration of renewable and distributed energy resources into the grid edge, leveraging tools from optimization, control and game theory. Before joining UIUC, he was a postdoctoral fellow at the Atkinson Center for Sustainability at Cornell University. Prior to that, he received his MS and Ph.D. degrees from Caltech in 2012 and 2014, respectively. He received the NSF CAREER Award in 2021. He has been the co-recipient of best paper awards at the IEEE Power and Energy Society General Meetings in 2013 and 2019. His research projects have been supported by grants from NSF, PSERC, Siebel Energy Institute and C3.ai, among others.
May 28, 2021 Video
For seven years we have studied stranded power (curtailment and negative pricing), a large (TWh) and rapidly growing phenomenon (>30% CAGR), characterizing its temporal and spatial structure in power grids with growing renewable generation. Our goal is to exploit this power for productive use, both reducing the associated carbon emissions for that use (e.g., cloud computing) and increasing grid renewable absorption. Our work shows that dispatchable computing loads that exploit stranded power can improve renewable absorption significantly. High-performance and cloud computing workloads can be supported with high quality of service, and superior economics (total cost of ownership) can be achieved despite intermittent power. Such loads can now exceed 1 gigawatt per site. In 2021, variations of these ideas are achieving acceptance and early deployment.
The rise of adaptive loads optimized for selfish purposes raises significant questions about acceptable and constructive behavior for both grid stability and operations (e.g., social welfare and decarbonization). We have explored the impact of adaptive cloud computing load on power prices and carbon emissions, varying the coupling model (none, selfish optimization, and grid optimization). The results show that adaptive computing loads can disturb grid dispatch, increasing carbon emissions and inflicting financial harm on other customers. At today’s cloud-computing penetration levels, such effects would be significant and increasing.
More generally, these studies expose fundamental challenges in making large-scale adaptive load a pillar of increased renewable absorption. Time permitting, we will discuss major economic and operational obstacles to creating adaptive load (flexibility) and effective coupling interfaces (e.g., services and markets). These challenges underpin an emerging “struggle for control” between competing spheres of optimization; most directly, the tension between societal aspirations for grid decarbonization and acceptable power costs.
Andrew A. Chien is the William Eckhardt Professor at the University of Chicago, Director of the CERES Center for Unstoppable Computing, and a Senior Scientist at Argonne National Laboratory. Since 2017, he has served as Editor-in-Chief of the Communications of the ACM, and he currently serves on the National Science Foundation CISE Advisory Committee. Chien leads the Zero-carbon Cloud project, exploring synergies between cloud computing and the renewable power grid. Chien is a global research leader in parallel computing, computer architecture, clusters, and cloud computing, and has received numerous awards for his research. Dr. Chien served as Vice President of Research at Intel Corporation from 2005-2010, leading long-range and “disruptive technologies” research as well as worldwide academic and government partnerships. From 1998-2005, he was the SAIC Chair Professor at UCSD, and prior to that a Professor at the University of Illinois. Dr. Chien is an ACM, IEEE, and AAAS Fellow, and earned his PhD, MS, and BS from the Massachusetts Institute of Technology.
October 1, 2021 Video
The talk focuses on the synthesis of data-based online algorithms to optimize the behavior of networked systems – with a specific focus on power grids – operating in uncertain, dynamically changing environments, and with human-in-the-loop components. Formalizing the optimization objectives through a time-varying optimization problem, the talk considers the design of online algorithms with the following features: i) concurrent learning modules are utilized to learn the cost function on-the-fly (using infrequent functional evaluations, and leveraging Gaussian Processes), and ii) algorithms are implemented in closed-loop with the plant to bypass the need for an accurate knowledge of the algebraic map of the system. Leveraging contraction arguments, the performance of the online algorithm is analyzed in terms of tracking of an optimal solution trajectory implicitly defined by the time-varying optimization problem; in particular, results in terms of convergence in expectation and in high-probability are presented, with the latter derived by adopting a sub-Weibull assumption on measurement errors and the error in the reconstruction of the cost. Additional results are offered in terms of cumulative fixed-point residual. The online algorithm is applied to solve real-time demand response problems in power grids, where one would like to strike a balance between maximizing the profit for providing grid services and minimizing the (unknown) discomfort function of the end-users. Additional examples include real-time optimal power flow applications.
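The tracking notion at the heart of this analysis can be illustrated with a scalar toy problem: run one gradient step per time instant on a drifting cost and measure the distance to the moving optimum. This sketch is illustrative only; the talk's algorithms additionally learn the cost via Gaussian Processes and close the loop with the plant.

```python
import math

# Time-varying cost f_k(x) = (x - x*(k))^2 with drifting optimizer
# x*(k) = sin(0.1 k). One online gradient step per time instant yields a
# bounded tracking error of the order (per-step drift) / (contraction gap).
x, lr, errs = 0.0, 0.3, []
for k in range(300):
    target = math.sin(0.1 * k)
    errs.append(abs(x - target))    # tracking error at time k
    x -= lr * 2 * (x - target)      # single online gradient step
```

The error never converges to zero, because the optimum keeps moving, but it stays within a neighborhood whose size shrinks as the drift slows or the step becomes more contractive.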
Emiliano Dall’Anese is an Assistant Professor in the Department of Electrical, Computer, and Energy Engineering at the University of Colorado Boulder, where he is also an affiliate faculty with the Department of Applied Mathematics. He received the Ph.D. degree from the Department of Information Engineering, University of Padova, Italy, in 2011. His research interests span the areas of optimization, control, and learning, with current emphasis on online optimization and optimization of dynamical systems; tools and methods are applied to problems in energy, transportation, and healthcare. He received the National Science Foundation CAREER Award in 2020, and the IEEE PES Prize Paper Award in 2021.
October 15, 2021 Video
A rising challenge in the control of large-scale systems such as energy and transportation networks is addressing the autonomous decision making of interacting agents. In this setting, a Nash equilibrium is a stable solution outcome in the sense that no agent finds it profitable to unilaterally deviate from her decision. Due to geographic distance, privacy concerns, or simply the scale of these systems, each agent can only base her decision on local measurements. Hence, a fundamental question is: do agents learn to play a Nash equilibrium strategy based only on local information? I will motivate this question with problems arising in energy markets and discuss my work on addressing it for two classes of games, those with continuous and finite action spaces. I will present our algorithms for learning Nash equilibria in these games, discuss their convergence, and show their applications in energy markets. This is joint work with my colleague T. Tatarenko and my former and current doctoral students O. Karaca and P.G. Sessa.
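A minimal instance of such local learning dynamics: two players run gradient descent on their own costs using only currently observable actions, and converge to the Nash equilibrium of a small quadratic game. The game and step size below are invented for illustration; the games in the talk are far more general.

```python
# Players i = 1, 2 minimize J_i(x_i, x_j) = (x_i - 1)^2 + 0.5 * x_i * x_j.
# Setting both partial gradients 2(x_i - 1) + 0.5 x_j to zero gives the
# Nash equilibrium x1 = x2 = 0.8. Gradient play uses only each player's
# local gradient, evaluated at the opponent's observed action.
x1, x2, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    g1 = 2 * (x1 - 1.0) + 0.5 * x2
    g2 = 2 * (x2 - 1.0) + 0.5 * x1
    x1, x2 = x1 - lr * g1, x2 - lr * g2
```

Convergence here follows because the game's pseudo-gradient is strongly monotone; the talk addresses the much harder settings where information is partial or actions are discrete.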
Maryam Kamgarpour holds a Doctor of Philosophy in Engineering from the University of California, Berkeley and a Bachelor of Applied Science from the University of Waterloo, Canada. Her research is on safe decision-making and control under uncertainty, game theory and mechanism design, mixed integer and stochastic optimization and control. Her theoretical research is motivated by control challenges arising in intelligent transportation networks, robotics, power grid systems, financial markets and healthcare. She is the recipient of the NASA High Potential Individual Award, the NASA Excellence in Publication Award, the European Union (ERC) Starting Grant and the NSERC Discovery Accelerator Grant. She was with the faculty of Electrical Engineering & Information Technology of ETH Zurich from 2016 to 2019 and the faculty of Electrical and Computer Engineering of UBC from 2020 to 2021, and seems to have finally found her home in the faculty of Mechanical Engineering at EPFL, where she has been since October 2021.
October 29, 2021 Video
As the complexity and uncertainty of modern energy systems grow, so does the need for good-quality data to improve system and market operations. Traditionally, data in energy operations has largely been regarded as a free and highly accessible commodity, in significant contrast to rising concerns about data privacy. Recent academic research has therefore been investigating various forms of data markets to incentivize data exchanges.
In this talk, I will be presenting our recent work on a specific type of data market that is based on linear regression frameworks. The chosen use case is wind power forecasting, in which wind agents have the potential to improve their forecasts through gaining access to measurement data from neighboring wind agents. Based on how data payments are determined, I will introduce two market clearing mechanisms: 1) a cooperative game theoretic profit allocation, and 2) a lasso (least absolute shrinkage and selection operator) payment scheme. A comparison of the two payment mechanisms will be presented to show the difference in their preserved market properties and in their levels of incentive for the data sellers and data buyers.
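The lasso mechanism can be made concrete with a tiny coordinate-descent lasso on invented data: an informative seller receives a nonzero coefficient (and hence a payment proportional to it), while an uninformative seller is shrunk to exactly zero. The data, regularization level, and payment rule below are all illustrative, not those of the talk.

```python
def soft(z, lam):
    # Soft-thresholding operator used in lasso coordinate descent.
    return max(z - lam, 0.0) if z > 0 else min(z + lam, 0.0)

def lasso_cd(X, y, lam, iters=100):
    # Coordinate descent for min_b (1/2n)||y - X b||^2 + lam * ||b||_1.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft(rho, lam) / norm
    return beta

# Buyer's forecast target y equals seller 1's signal; seller 2 is orthogonal.
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [1, -1, 1, -1]
beta = lasso_cd(X, y, lam=0.2)   # -> [0.8, 0.0]: only seller 1 is paid
```

The shrinkage that makes lasso select features is the same device that makes the payment scheme budget-aware: sellers whose data does not improve the buyer's forecast receive nothing.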
Joint work with Pierre Pinson and Jalal Kazempour.
Liyang Han is a postdoc researcher in the Energy Analytics and Markets group at the Technical University of Denmark. His current research focuses on the economics of data, with applications in renewable energy forecasting and other energy domains. Prior to coming to DTU, he received his PhD from the University of Oxford with a thesis focusing on the application of cooperative game theoretic frameworks in prosumer-centric energy markets. He is interested in expanding his research on data markets to broader applications, with metrics to measure the feasibility and fairness of such markets.
November 1, 2021 Video
This talk discusses an electricity market clearing formulation that seeks to remunerate spatio-temporal, load-shifting flexibility provided by data centers (DaCes). Load-shifting flexibility is a key asset for power grid operators as they aim to integrate larger amounts of intermittent renewable power and to decarbonize the grid. Central to our study is the concept of virtual links, which provide non-physical pathways that can be used by DaCes to shift power loads (by shifting computing loads) across space and time. We use virtual links to show that the clearing formulation treats DaCes as prosumers that simultaneously request load and provide a load-shifting flexibility service. Our analysis also reveals that DaCes are remunerated for the provision of load-shifting flexibility based on nodal price differences (across space and time). We also show that DaCe flexibility helps reduce space-time price volatility and show that the clearing formulation satisfies fundamental economic properties that are expected from coordinated markets (e.g., provides a competitive equilibrium and achieves revenue adequacy and cost recovery). The concepts presented are applicable to other key market players that can offer space-time shifting flexibility such as distributed manufacturing facilities and storage systems. Case studies are presented to demonstrate these properties.
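The remuneration rule has a simple arithmetic core: each unit of load shifted along a virtual link earns the nodal price difference between the link's endpoints. The prices and shifted quantities below are invented purely for illustration.

```python
# Nodal prices pi[(node, time)] in $/MWh -- made-up values.
prices = {('A', 0): 50.0, ('A', 1): 30.0, ('B', 0): 45.0, ('B', 1): 20.0}

# Each virtual link shifts q MWh of load from (n1, t1) to (n2, t2).
shifts = [('A', 0, 'B', 1, 10.0),   # spatio-temporal shift
          ('A', 0, 'A', 1, 5.0)]   # purely temporal shift

# Remuneration = shifted quantity times the nodal price difference.
payment = sum(q * (prices[(n1, t1)] - prices[(n2, t2)])
              for n1, t1, n2, t2, q in shifts)   # 10*30 + 5*20 = 400.0
```

Moving load from expensive node-time pairs to cheap ones is profitable for the DaCe exactly when it relieves the grid, which is the alignment the clearing formulation exploits.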
Victor M. Zavala is the Baldovin-DaPra Professor in the Department of Chemical and Biological Engineering at the University of Wisconsin-Madison and a computational mathematician in the Mathematics and Computer Science Division at Argonne National Laboratory. He holds a B.Sc. degree from Universidad Iberoamericana and a Ph.D. degree from Carnegie Mellon University, both in chemical engineering. He is on the editorial board of the Journal of Process Control, Mathematical Programming Computation, and Computers & Chemical Engineering. He is a recipient of NSF and DOE Early Career awards and of the Presidential Early Career Award for Scientists and Engineers (PECASE). His research interests include statistics, control, and optimization and applications to energy and environmental systems.
November 12, 2021 Video
The growing climate impact of increased greenhouse gas emissions and CO2 levels in Earth’s atmosphere highlights the value and importance of technologies that reduce such impact. Electricity generation is a significant contributor to global CO2 emissions, and the datacenter industry is expected to reach anywhere from 3 to 13% of global electricity demand by 2030. Datacenters have the potential to facilitate grid decarbonization in a manner different from isolated power loads, and Google has set out on a mission to increase and scale carbon awareness via new technology solutions. We present Google’s globally-distributed Carbon-Intelligent Computing System, which harnesses datacenter load flexibility, not only to reduce emissions, but also to contribute to more robust, resilient, and cost-efficient decarbonization of the grid through energy system integration. We share some conclusions derived from operational data and discuss the key technical challenges that drive carbon-aware designs and their future enhancements. By calling out opportunities for “greening” the cloud industry, we hope to inspire academia and industry to pursue diverse and system-specific carbon-aware solutions.
Ana Radovanovic has been a research scientist at Google since early 2008, after she earned her PhD degree in Electrical Engineering from Columbia University (2005) and worked for 3 years as a Research Staff Member in the Mathematical Sciences Department at IBM TJ Watson Research Center. For the last 8 years, she has focused her research efforts at Google on building innovative technologies and business models with two goals in mind: (i) to deliver more reliable, affordable and clean electricity to everyone in the world, and (ii) to help Google become a thought leader in decarbonizing the electricity grid. Nowadays, Ana is widely recognized as a technical lead and research entrepreneur. She is a Senior Staff Research Scientist, serving as a Technical Lead for Energy Analytics and Carbon Aware Computing at Google.
November 19, 2021 Video
Recent years have witnessed the rapid translation of contemporary advances in machine learning (ML) and data science into tools that aid the transition of energy systems into a truly sustainable, resilient, and distributed infrastructure. A blind application of the latest-and-greatest ML algorithms to solve stylized grid operation problems, however, may fail to recognize the underlying physics models or safety constraint requirements. This talk will introduce two examples of bringing physics-aware and risk-aware ML advances into efficient and reliable grid operations. First, we develop a topology-aware approach using graph neural networks (GNNs) to predict the price and line congestion as the outputs of the real-time AC optimal power flow problem. Building upon the underlying relation between prices and topology, this proposed solution significantly reduces the model complexity of existing end-to-end ML methods while efficiently adapting to varying grid topology. Second, we put forth a risk-aware ML method to ensure the safety guarantees of data-driven, scalable reactive power dispatch policies in distribution grids. An adaptive sampling method is leveraged to improve the gradient computation of the conditional value-at-risk (CVaR) and attain guaranteed voltage-violation performance. Our proposed algorithms demonstrate the advantages of incorporating power system modeling into the design of learning-enabled power system operations.
Hao Zhu is currently an Assistant Professor of Electrical and Computer Engineering (ECE) at The University of Texas at Austin. She received the B.S. degree from Tsinghua University in 2006, and the M.Sc. and Ph.D. degrees from the University of Minnesota in 2009 and 2012, all in electrical engineering. From 2012 to 2017, she was a Postdoctoral Research Associate and then an Assistant Professor of ECE at the University of Illinois at Urbana-Champaign. Her research focuses on developing innovative algorithmic solutions for learning and optimization problems in future energy systems. Her current research interests include physics-aware and risk-aware machine learning for power systems, and the design of energy management system under the cyber-physical coupling. Dr. Zhu is a recipient of the NSF CAREER Award (2017), an invited attendee of the US NAE Frontier of Engineering Symposium (2020), as well as the faculty advisor for three Best Student Papers awarded at the North American Power Symposium. She is currently a member of the IEEE Power & Energy Society (PES) Long Range Planning (LRP) Committee and an Associate Editor for the IEEE Transactions on Smart Grid.
December 3, 2021 Video
We formulate a generic network game as a generalized Nash equilibrium problem. Relying on the normalized Nash equilibrium as the solution concept, we provide a parametrized proximal algorithm to span many equilibrium points. Enriching the setting, we consider an information structure in which the agents in the network can withhold some local information derived from sensitive data, resulting in private coupling constraints. The convergence of the algorithm and the deviations in the players’ strategies at equilibrium are formally analyzed. In addition, an extension of duality theory enables the algorithm to coordinate the agents, through a fully distributed pricing mechanism, on one specific equilibrium with desirable properties at the system level (economic efficiency, fairness, etc.). To that purpose, the game is recast as a hierarchical decomposition problem, and a procedure is detailed to compute the equilibrium that minimizes a secondary cost function capturing system-level properties. An application is presented to a peer-to-peer energy trading problem, under complete and incomplete information. Under incomplete information, assuming that the agents anticipate the form of the market equilibrium, an aggregative noncooperative stochastic game is formulated to model the communication mechanism between the agents on top of the peer-to-peer energy trading game. Analytical and numerical analyses are provided to capture the impact of privacy constraints on the generalized Nash equilibria.
Hélène Le Cadre received a Ph.D. degree in applied mathematics, with a focus on network game theory for communication networks. She worked as an assistant professor at Ecole des Mines de Paris, in Sophia-Antipolis, France, and was a visiting researcher at the Center for Operations Research and Econometrics (CORE) at the Université catholique de Louvain, in Belgium. She spent 5 years abroad in the Flemish part of Belgium, specializing in operations research techniques and markets. Currently, she is working as a permanent researcher at Inria Lille-Nord Europe, in France. Her research interests revolve around (algorithmic) game theory, optimization, machine learning and forecasting, with applications in energy, telecommunication networks and economics.
December 10, 2021 Video
The wide-scale electrification of the transportation sector won’t be possible without careful planning and coordination with the power grid and the companies that manage its operation. If left unmanaged, the uncoordinated charging of EVs at increased levels of penetration will amplify existing peak loads, potentially outstripping the grid’s current capacity to meet demand. In this talk, I’ll present findings from the OptimizEV Project—a real-world pilot study in Upstate New York exploring a novel approach to coordinated residential EV charging. As one of its primary objectives, the OptimizEV platform seeks to harness the latent flexibility in EV charging by offering EV owners monetary incentives to delay the time required to charge their EVs. Each time an EV owner initiates a charging session, they can use their smartphone to specify how long they intend to leave their vehicle plugged in by selecting from a “menu of deadlines” that offers lower electricity prices the longer they’re willing to delay the time required to charge their car. Given a collection of active charging requests, a smart charging system dynamically optimizes the power being drawn by every EV in real time to minimize their collective strain on the grid, while ensuring that every customer’s car is fully charged by its deadline. Customers get their energy when they need it, and the smart charging system can optimally coordinate the delivery of that energy to avoid spikes in demand. I’ll describe important lessons learned from the OptimizEV Project related to customer behavior, EV charging characteristics, and EV transportation patterns.
This is joint work with Polina Alexeenko and New York State Electric and Gas (NYSEG).
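A toy version of deadline-aware smart charging can be sketched as a greedy earliest-deadline-first allocation under a site power limit. This heuristic is illustrative only and is not the OptimizEV optimizer; all numbers are made up.

```python
def edf_schedule(evs, horizon, site_limit, rate):
    # evs: list of [energy_needed, deadline_slot]; one slot = one time unit,
    # so per-slot energy equals power. Greedy earliest-deadline-first
    # allocation under a shared site power limit.
    load = [0.0] * horizon
    for t in range(horizon):
        active = sorted((e for e in evs if e[0] > 1e-9 and t < e[1]),
                        key=lambda e: e[1])
        budget = site_limit
        for e in active:
            p = min(rate, e[0], budget)   # per-EV rate, need, and site cap
            e[0] -= p
            load[t] += p
            budget -= p
    return load

cars = [[2.0, 2], [2.0, 4], [3.0, 6]]   # (energy needed, deadline slot)
load = edf_schedule(cars, horizon=6, site_limit=2.0, rate=1.0)
```

Later deadlines let the scheduler spread energy over more slots, which is precisely the flexibility the "menu of deadlines" elicits from customers.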
Eilyan Bitar is an Associate Professor in the School of Electrical and Computer Engineering at Cornell University. His current research is focused on the optimization, control, and economics of sustainable electric power and transportation systems. He received his BS (2006) and PhD (2011) from UC Berkeley. Prior to joining Cornell in 2012, he spent one year as a Postdoctoral Fellow at the California Institute of Technology and UC Berkeley. He is a recipient of the NSF Faculty Early Career Development Award (CAREER), the David D. Croll Sesquicentennial Faculty Fellowship, the John and Janet McMurtry Fellowship, the John G. Maurer Fellowship, and the Robert F. Steidel Jr. Fellowship.
December 17, 2021 Video
We consider the combination of a network design and graph partitioning model in a multilevel framework for determining the optimal network expansion and the optimal zonal configuration of zonal pricing electricity markets. The two classical discrete optimization problems of network design and graph partitioning, together with nonlinearities due to economic modeling, yield extremely challenging mixed-integer nonlinear multilevel models for which we develop two problem-tailored solution techniques. The first approach relies on an equivalent bilevel formulation and a standard KKT transformation thereof, including novel primal-dual bound tightening techniques, whereas the second is a tailored Benders-like decomposition approach. We prove for both methods that they yield globally optimal solutions. Afterward, we compare the approaches in a numerical study and show that the tailored Benders approach clearly outperforms the standard KKT transformation. Finally, we present a case study that illustrates the economic effects captured in our model and discuss some recent results for the German electricity sector.
March 11, 2022 Video
The water and power networks are heavily interdependent. The water network requires power for extraction, treatment, and distribution processes. The power network requires water for thermoelectric generation, i.e., cooling and emissions scrubbing, as well as fuel extraction and processing. By and large, these networks are operated independently, but there is a growing body of research beginning to demonstrate the benefits of operating these networks jointly. In this talk, I will describe our research to develop operational planning and real-time control strategies for coupled water-power distribution networks under uncertainty. Specifically, we formulate chance constrained and robust optimization problems that simultaneously determine water pumping schedules along with the parameters of real-time control policies that can be used to respond to water and power demand forecast errors. We show how the water network can help the power network manage voltage deviations resulting from intermittent solar PV production and provide frequency regulation to the bulk power network, while satisfying the water demand and the physical constraints of both networks. This is joint work with Anna Stuhlmacher.
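The chance-constrained ingredient can be illustrated in its textbook scalar form: a Gaussian chance constraint tightens into a deterministic one by a quantile margin. The numbers below are invented, and this is a sketch of the standard reformulation rather than the talk's actual formulation.

```python
import random
from statistics import NormalDist

# Enforce P(pump_load + demand_error <= capacity) >= 1 - eps with
# demand_error ~ N(0, sigma^2). The equivalent deterministic tightening is
#   pump_load <= capacity - z_{1-eps} * sigma.
eps, sigma, capacity = 0.05, 0.3, 5.0
z = NormalDist().inv_cdf(1 - eps)            # ~1.645 for eps = 0.05
pump_load = capacity - z * sigma             # largest feasible schedule

# Monte Carlo check of the resulting violation probability.
random.seed(1)
trials = 100_000
viol = sum(pump_load + random.gauss(0.0, sigma) > capacity
           for _ in range(trials)) / trials
```

By construction the schedule sits exactly at the 95% reliability boundary, so the empirical violation rate hovers near 5%; pushing `eps` lower buys reliability at the cost of a smaller feasible pumping schedule.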
Johanna Mathieu is an associate professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. Prior to joining Michigan, she was a postdoctoral researcher at ETH Zurich, Switzerland. She received her PhD and MS from the University of California at Berkeley and her SB in Ocean Engineering from MIT in 2004. She is the recipient of an NSF CAREER Award, the Ernest and Bettine Kuh Distinguished Faculty Award, and the UM Henry Russel Award. Her research focuses on ways to reduce the environmental impact, cost, and inefficiency of electric power systems via new operational and control strategies. She is particularly interested in developing new methods to actively engage distributed flexible resources such as energy storage, electric loads, and distributed renewable resources in power system operation, which are especially important in power systems with high penetrations of intermittent renewable energy resources such as wind and solar.
March 18, 2022 Video
Integration of volatile and uncertain renewable energy resources and synergistic energy storage and demand response technologies motivates the pursuit of stochastic electricity market designs to accommodate these resources and technologies efficiently and reliably. Until recently, however, the primary means of achieving these goals rested on traditional optimization techniques and, in particular, on stochastic optimization and its proxies. Recent developments make it possible to “re-invent” stochastic electricity markets by borrowing results from data science, machine learning, and financial engineering. This presentation will first describe how stressing of load and renewable time series can be achieved using principal component analysis and generative adversarial networks, and then how these stressed time series can be integrated into a stochastic electricity market design. Building on this result, we will describe a machine learning approach to stochastic market clearing that affords both market- and operation-feasible solutions and reduces computational requirements.
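The PCA side of the "stressing" idea can be sketched in a few lines: decompose historical daily load profiles into principal components, amplify the leading mode, and reconstruct. This is only an illustration of the mechanism with synthetic data (the talk additionally uses generative adversarial networks, omitted here), and the data-generating parameters below are invented for the example.

```python
# PCA-based stressing of a load time series (illustrative sketch, synthetic data).
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
base = 50 + 20 * np.sin(2 * np.pi * (hours - 6) / 24)   # hypothetical daily shape
# 200 historical days: random day-to-day amplitude plus measurement noise.
X = np.outer(1 + rng.normal(0, 0.2, 200), base) + rng.normal(0, 1, (200, 24))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)  # PCA via SVD
scores = U * S                                           # per-day component scores

stress = 2.0                                             # amplify the leading mode 2x
scores_stressed = scores.copy()
scores_stressed[:, 0] *= stress
X_stressed = mean + scores_stressed @ Vt                 # reconstruct stressed days
```

Because the component scores are centered, the stressed ensemble keeps the historical mean profile while its variability along the dominant mode (here, overall load amplitude) grows, which is the kind of tail-heavy scenario set a stochastic market design is then cleared against.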
Yury Dvorkin (he/him/his) is an Assistant Professor and Goddard Junior Faculty Fellow in the Department of Electrical and Computer Engineering at New York University’s Tandon School of Engineering with an affiliated appointment at NYU’s Center for Urban Science and Progress. Before joining NYU, he was a Ph.D. student (class of 2016) at the University of Washington, under the supervision of Prof. Daniel S. Kirschen, and a graduate student researcher (2014) at the Center for Nonlinear Studies, Los Alamos National Laboratory. For his dissertation work, entitled “Operations and Planning in Sustainable Power Systems,” Yury was awarded the inaugural 2016 Scientific Achievement Award by the Clean Energy Institute (University of Washington). In 2019, Yury received the NSF CAREER Award to investigate small-scale electricity trading; later that year, he received the Goddard Junior Faculty Fellowship. Yury is also a dedicated reviewer, who was awarded the Best Reviewer Award from IEEE Transactions on Power Systems (2014, 2015, 2016, 2017, 2018), IEEE Transactions on Sustainable Energy (2014, 2015, 2016), and IEEE Transactions on Smart Grid (2016). Since 2019, Yury has been an Associate Editor of the IEEE Transactions on Smart Grid.
Yury’s research focuses on developing modeling and algorithmic solutions to assist society in accommodating emerging smart grid technologies (e.g., intermittent generation, demand response, storage, smart appliances, cyber infrastructure) using multi-disciplinary methods in engineering, operations research, economics, and policy analysis. His current research is funded by the National Science Foundation, US Department of Energy, US Department of Transportation, Advanced Research Projects-Energy, Electric Power Research Institute, New York State Energy Research and Development Authority, and Alfred P. Sloan Foundation.
April 1, 2022 Video
The extensive monitoring of the power grid, together with the proliferation of technologies for intelligent metering, control, and computation, is allowing us to record, and possibly exploit, the context under which power grids are operated. In this talk, we will show examples of how ideas from the fields of machine learning, optimization, and decision-making can be put to work together to develop context-driven methods that improve the solution of challenging problems in power system operations. We will discuss a variety of context-driven methods of different complexity, some of which are surprising for their apparent simplicity yet remarkable benefits in terms of computational cost and/or social welfare, demonstrating the huge potential value of contextual data that is still to be unleashed.
Juan M. Morales is currently the head of the research group OASYS (Optimization and Analytics for Sustainable Energy Systems, oasys.uma.es) at the University of Málaga in Spain, where he also holds a tenured associate professorship in the Department of Applied Mathematics. Juan M. Morales received his M.Sc. degree in Industrial Engineering from the University of Málaga and his Ph.D. in Electrical Engineering from the University of Castilla-La Mancha, Spain, in 2006 and 2010, respectively. In 2011, he was awarded a Hans Christian Ørsted research fellowship by the Technical University of Denmark, where he was also an associate professor in Stochastic Optimization in Energy Systems within the Department of Applied Mathematics and Computer Science until 2016. Juan M. Morales’s expertise lies in the fields of Data Analytics and Optimization, with particular focus on their applications to Energy Engineering and Economics, to which he has contributed a number of technical publications, including two monographs on the challenges of a fossil-free energy sector. He is a recipient of a Starting Grant awarded by the European Research Council for his project “Advanced Analytics to Empower the Small Flexible Consumers of Electricity,” a Senior Member of IEEE, and a current member of the editorial boards of IEEE Transactions on Power Systems and the Springer journal TOP (the official journal of the Spanish Society of Statistics and Operations Research).
April 15, 2022 Video
Making use of modern black-box AI tools is potentially transformational for online optimization and control. However, such machine-learned algorithms typically do not have formal guarantees on their worst-case performance, stability, or safety. So, while their performance may improve upon traditional approaches in “typical” cases, they may perform arbitrarily worse in scenarios that the training data does not represent, e.g., due to distribution shift. This represents a significant drawback when considering the use of AI tools for energy systems and autonomous cities, which are safety-critical. A challenging open question is thus: Is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will introduce recent work that aims to develop algorithms that make use of black-box AI tools to provide good performance in the typical case while integrating the “untrusted advice” from these algorithms into traditional algorithms to ensure formal worst-case guarantees. Specifically, we will discuss the use of black-box untrusted advice in the context of online convex body chasing, online non-convex optimization, and linear quadratic control, identifying both novel algorithms and fundamental limits in each case.
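The flavor of the "untrusted advice" tradeoff is easiest to see on the classic ski-rental problem, where a deterministic algorithm in the style of Purohit et al. interpolates between trusting a prediction and hedging against it. This is a stand-in illustration only; the talk's settings (convex body chasing, non-convex optimization, LQ control) require substantially more machinery.

```python
# Ski-rental with an untrusted prediction (illustrative sketch).
# Rent skis for 1/day or buy for b; the season length x is unknown,
# and a black-box predictor supplies a guess y of x.
import math

def ski_rental_with_prediction(b, y, lam, x):
    """Total cost paid, with trust parameter lam in (0, 1].
    Roughly (1 + lam)-competitive when the prediction y is accurate
    ("consistency") and (1 + 1/lam)-competitive no matter how wrong
    y is ("robustness")."""
    # Trust the advice: buy early if it predicts a long season,
    # otherwise delay buying well past the break-even day b.
    buy_day = math.ceil(lam * b) if y >= b else math.ceil(b / lam)
    if x >= buy_day:
        return (buy_day - 1) + b      # rent until buy_day, then buy
    return x                          # season ended before we bought

# Good prediction: y = x = 30 >> b = 10; offline optimum is b = 10.
cost_good = ski_rental_with_prediction(b=10, y=30, lam=0.5, x=30)   # 14, ratio 1.4
# Bad prediction: y = 2 but x = 30; cost stays within 1 + 1/lam = 3 of optimal.
cost_bad = ski_rental_with_prediction(b=10, y=2, lam=0.5, x=30)     # 29, ratio 2.9
```

Smaller lam means trusting the predictor more: the typical-case ratio approaches 1 while the worst-case guarantee degrades gracefully, which is exactly the consistency-robustness tension the talk studies in richer online settings.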
Adam Wierman is a Professor in the Department of Computing and Mathematical Sciences at Caltech. He received his Ph.D., M.Sc., and B.Sc. in Computer Science from Carnegie Mellon University and has been on the faculty at Caltech since 2007. Adam’s research strives to make the networked systems that govern our world sustainable and resilient. He is best known for his work spearheading the design of algorithms for sustainable data centers and his co-authored book “The Fundamentals of Heavy Tails.” He is a recipient of multiple awards, including the ACM SIGMETRICS Rising Star Award, the ACM SIGMETRICS Test of Time Award, the IEEE Communications Society William R. Bennett Prize, and multiple teaching awards, and he is a co-author of papers that have received best paper awards at a wide variety of conferences across computer science, power engineering, and operations research.