INESC-ID – Instituto de Engenharia de Sistemas e Computadores: Investigação e Desenvolvimento em Lisboa

Distinguished Lectures
Moshe Y. Vardi

Rice University

An Ethical Crisis in Computing

3 December 2020

17:00, https://videoconf-colibri.zoom.us/j/949837545

Abstract

>Computer scientists often think of "Ender's Game" these days. In this award-winning 1985 science-fiction novel by Orson Scott Card, Ender is trained at Battle School, an institution designed to turn young children into military commanders against an unspecified enemy. Ender's team engages in a series of computer-simulated battles, eventually destroying the enemy's planet, only to learn that the battles were very real and a real planet had been destroyed. Many of us got involved in computing because programming was fun. The benefits of computing seemed intuitive to us. We truly believe that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Like Ender, however, we have recently realized that computing is not a game – it is real – and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smartphone addiction. The common reaction to this crisis is to label it an "ethical crisis", and the proposed response is to add ethics courses to the academic computing curriculum. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology's impact on society. Technology is driving the future, but who is doing the steering?

Bio

>Moshe Y. Vardi is University Professor and the Karen Ostrum George Distinguished Service Professor in Computational Engineering at Rice University. He is the recipient of several awards, including the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author or co-author of over 650 papers, as well as two books. He is a fellow of several societies and a member of several academies, including the US National Academy of Engineering and the US National Academy of Sciences. He holds seven honorary doctorates. He is a Senior Editor of the Communications of the ACM, the premier publication in computing.

Host

>Joaquim Armando Pires Jorge


James Larus

School of Computer and Communication Sciences (IC) at EPFL (École Polytechnique Fédérale de Lausanne)

Programming Non-Volatile Memory

15 October 2019

14:30, IST Alameda - Anfiteatro Abreu Faro

Abstract

>New memory technologies are changing the computer systems landscape. Motivated by the power limitations of DRAM, new, non-volatile memory (NVM) technologies — such as ReRAM, PCM, and STT-RAM — are likely to be widely deployed in server and commodity computers in the near future. These memories erase the classical dichotomy between slow, non-volatile disks or SSDs and fast, volatile memory, greatly expanding the possible uses of durability mechanisms. Taking advantage of non-volatility is not as simple as just writing data to NVM. Without programming support, it is challenging to write correct, efficient code that permits recovery after a power failure since the restart mechanism must find a consistent state in the durable storage. This problem is well-known in the database community, and a significant portion of a DB system is devoted to ensuring recoverability after failures.
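
The ordering problem described above is the crux: without programming support, a crash can leave durable data half-updated. As a minimal sketch of the database-style remedy the abstract alludes to (write-ahead logging), the code below is our illustration, not material from the talk; `flush()` stands in for the persist barrier (on real NVM this would be cache-line write-backs and fences), and the file paths are invented.

```python
import json
import os

def flush(f):
    # assumption: stand-in for an NVM persist barrier; here we simply
    # force the OS to push the bytes to durable storage
    f.flush()
    os.fsync(f.fileno())

def update(log_path, data_path, new_state):
    # 1. make the intended update durable in the log *before* touching data
    with open(log_path, "w") as log:
        log.write(json.dumps(new_state))
        flush(log)
    # 2. only then update the data in place
    with open(data_path, "w") as data:
        data.write(json.dumps(new_state))
        flush(data)
    # 3. the update is complete; the log entry is no longer needed
    os.remove(log_path)

def recover(log_path, data_path):
    # a crash between steps 1 and 2 is safe: recovery redoes the logged update
    if os.path.exists(log_path):
        with open(log_path) as log, open(data_path, "w") as data:
            data.write(log.read())
        os.remove(log_path)
```

The invariant is that the log becomes durable before the in-place write begins, so the restart mechanism can always finish (or discard) a half-done update and find a consistent state.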

Bio

>James Larus is Professor and Dean of the School of Computer and Communication Sciences (IC) at EPFL (École Polytechnique Fédérale de Lausanne). Prior to joining IC in October 2013, Larus was a researcher, manager, and director in Microsoft Research for over 16 years and an assistant and associate professor in the Computer Sciences Department at the University of Wisconsin, Madison.

Host

>Rodrigo Seromenho Miragaia Rodrigues


Professor Alan V. Oppenheim

Massachusetts Institute of Technology (MIT)

How I Think About Research

24 September 2019

11:00, Centro de Congressos do IST

Abstract

>In the context of our roles in mentoring doctoral students, there are many ways of finding and formulating research problems and ideas. My own approach over many decades has been to focus on a style that gives students, as much as possible, the experience of an initially unstructured intellectual adventure with a safety net underneath. I like to describe the style as: "Having fun, chasing interesting ideas, which lead to solutions, in search of problems." In this talk I will say a little more about this style and illustrate it with a few examples. In the examples, the focus is not on the details of the solution, but on how the topic originated and where it led in terms of potential practical applications.

Bio

>Professor Alan V. Oppenheim is a Principal Investigator in the Research Laboratory of Electronics (RLE) and Ford Professor of Engineering at the Massachusetts Institute of Technology (MIT). He received the S.B. and S.M. degrees in 1961 and the Sc.D. degree in 1964, all in Electrical Engineering, from the Massachusetts Institute of Technology. He is also the recipient of an honorary doctorate from Tel Aviv University. During his career he has been closely affiliated with MIT Lincoln Laboratory and with the Woods Hole Oceanographic Institution. His research interests are in the general area of signal processing algorithms, systems and applications. He is coauthor of the widely used textbooks Digital Signal Processing, Discrete-Time Signal Processing (currently in its third edition), Signals and Systems (currently in its second edition), and most recently Signals, Systems & Inference, published in 2016. He is also editor of several advanced books on signal processing. Throughout his career he has published extensively in research journals and conference proceedings. Dr. Oppenheim is a member of the National Academy of Engineering, a Life Fellow of the IEEE, and a member of Sigma Xi and Eta Kappa Nu. He has been a Guggenheim Fellow and a Sackler Fellow.

Host

>Isabel Maria Martins Trancoso


Prof. Milind Tambe

University of Southern California

AI for Social Good: Learning and Planning in the End-to-End, Data-to-Deployment Pipeline

17 April 2019

13:30, Room 0.19/0.20, IST - Pavilhão de Informática II, Alameda

Abstract

>With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. I will focus on the problems of public safety and security, wildlife conservation and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to strategically deploy our limited intervention resources in these problem domains. I will discuss the importance of conducting this research via building the full data to field deployment end-to-end pipeline rather than just building machine learning or planning components in isolation. Results from our deployments from around the world show concrete improvements over the state of the art. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.

Bio

>Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC) and the Founding Co-Director of CAIS, the USC Center for Artificial Intelligence in Society, where his research focuses on advancing AI and multiagent systems research for social good. He is a recipient of the IJCAI (International Joint Conference on AI) John McCarthy Award, the ACM/SIGAI Autonomous Agents Research Award from AAMAS (Autonomous Agents and Multiagent Systems Conference), the AAAI (Association for Advancement of Artificial Intelligence) Robert S. Engelmore Memorial Lecture Award, the INFORMS Wagner Prize, the Rist Prize of the Military Operations Research Society, the Christopher Columbus Fellowship Foundation Homeland Security Award, and the International Foundation for Autonomous Agents and Multiagent Systems Influential Paper Award; he is a fellow of AAAI and ACM. He has also received a Meritorious Commendation from the US Coast Guard and the LA Airport Police, and a Certificate of Appreciation from the US Federal Air Marshals Service for pioneering real-world deployments of security games. Prof. Tambe has also co-founded a company based on his research, Avata Intelligence, where he serves as the director of research. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.

Host

>Sandra Maria Lopes de Sá


Andreas Zeller

CISPA Helmholtz Center for Information Security

Generating Software Tests

15 April 2019

11:00, Anfiteatro VA4, floor -1 of the Civil Engineering building – IST/Alameda

Abstract

>Software has bugs. What can we do to find as many of these as possible? In this talk, I show how to systematically test software by generating such tests automatically, starting with simple random "fuzzing" generators and then proceeding to more effective grammar-based and coverage-guided approaches. Being fully automatic and easy to deploy, such fuzzers run at little cost, yet are very effective in finding bugs: our own LangFuzz grammar-based test generator for JavaScript runs around the clock for the Firefox, Chrome, and Edge web browsers and so far has found more than 2,600 confirmed bugs. Our latest test generator prototypes are even able to automatically learn the input language of a given program, which makes it possible to generate highly effective tests for arbitrary programs without any particular setup. In recent months, we have collected our tools and techniques in an interactive textbook (www.fuzzingbook.org) with 10,000 well-documented lines of Python code for highly productive fuzzing.
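
For a flavour of the simplest technique mentioned above, here is a minimal random ("fuzzing") input generator in the spirit of the fuzzingbook material. It is a generic sketch, not code from the book or the talk; `run_target` stands in for whatever program is under test.

```python
import random
import string

def fuzz(max_length=100, charset=string.printable):
    """Produce one random input string (the classic 'fuzzing' generator)."""
    length = random.randrange(max_length + 1)
    return "".join(random.choice(charset) for _ in range(length))

def fuzz_campaign(run_target, trials=1000):
    """Feed random inputs to a target and collect the ones that crash it."""
    crashes = []
    for _ in range(trials):
        data = fuzz()
        try:
            run_target(data)           # assumption: a callable under test
        except Exception as exc:
            crashes.append((data, exc))  # any uncaught exception counts as a bug
    return crashes
```

Grammar-based and coverage-guided fuzzers refine exactly this loop, by constraining the generated inputs to a grammar or steering generation toward unexplored code.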

Bio

>Andreas Zeller is Faculty at the CISPA Helmholtz Center for Information Security and Professor of Software Engineering at Saarland University, both in Saarbrücken, Germany. In 2010, Zeller was inducted as a Fellow of the ACM for his contributions to automated debugging and mining software archives, for which he also obtained the ACM SIGSOFT Outstanding Research Award in 2018. His current work focuses on specification mining and test case generation, funded by grants from the DFG and the European Research Council (ERC). (https://www.st.cs.uni-saarland.de/zeller/)

Host

>António Manuel Ferreira Rito da Silva


Jack Edmonds

Origins of NP and P

20 February 2019

13:30, FA3 – Informatics Department, IST Alameda

Abstract

>NP and P have origins in "the marriage theorem": A matchmaker has as clients the parents of some boys and some girls, where some boy-girl pairs love each other. The matchmaker must find a marriage of all the girls to distinct boys they love, or else prove to the parents that it is not possible. The input to this marriage problem is usually imagined as a bipartite graph G with boy nodes, girl nodes, and edges between them representing love. A possible legal marriage of some of the girls to some of the boys is represented by a subset M of the edges of G, called a matching. The matchmaker's problem is to find a matching which hits all the girl nodes, or else prove to the parents that there is none... Full announcement at https://thor.inesc-id.pt/jack.edmonds/
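
The matchmaker's problem is exactly maximum bipartite matching, and the classical augmenting-path method (standard textbook material, not taken from the announcement) solves it in polynomial time. A compact sketch, assuming the input is a dict from each girl to the boys she loves:

```python
def max_matching(loves):
    """loves: dict girl -> list of boys she loves.
    Returns (match, matched) where match maps boy -> girl."""
    match = {}

    def try_assign(girl, seen):
        for boy in loves[girl]:
            if boy in seen:
                continue
            seen.add(boy)
            # boy is free, or his current partner can be re-matched elsewhere
            if boy not in match or try_assign(match[boy], seen):
                match[boy] = girl
                return True
        return False

    matched = sum(try_assign(girl, set()) for girl in loves)
    return match, matched

# every girl can marry iff matched == number of girls; otherwise Hall's
# theorem gives the parents a proof: some set of girls collectively loves
# too few boys (here Bea and Cai both love only Bo).
_, ok = max_matching({"Ann": ["Al", "Bo"], "Bea": ["Bo"], "Cai": ["Bo"]})
assert ok == 2
```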

Bio

>Jack Edmonds is one of the creators of combinatorial optimization. He attended George Washington University before pursuing graduate study at the University of Maryland. He received his master's degree in 1959 and began work at the National Bureau of Standards (NBS). He moved to the University of Waterloo in 1969, where he supervised a dozen PhD students. Throughout his career, he has influenced and assisted numerous young researchers. In the 1960s, Jack Edmonds developed a theory of matroid partition and intersection that still stands as one of the most profound and thorough explorations in the field. He illustrated the deep interconnections between combinatorial minmax theorems, polyhedral structure, duality theory, and efficient algorithms. He published many influential papers on these topics; the 1972 paper with Richard Karp on theoretical improvements in algorithmic efficiency for network flow problems led to one of the best-known algorithms among CS students today. He was awarded the John von Neumann Theory Prize for his contributions as a researcher and educator in 1985. Jack Edmonds retired from teaching in 1999 and was elected into the inaugural Fellows class of the Institute for Operations Research and the Management Sciences. (https://thor.inesc-id.pt/jack.edmonds/bio.txt)

Host

>Alexandre Paulo Lourenço Francisco


Norbert Fuhr

University of Duisburg-Essen, Germany

Modeling Interactive Information Retrieval and Social Media Interaction as Stochastic Processes

4 February 2019

16:30

Abstract

>Stochastic models have a long history in information retrieval (IR). For modeling sequences of interactions, different variants of Markov models have been proposed by a number of researchers. Here a user moves stochastically between different model states, which are not directly observable; instead, some signal is emitted along with each transition. For applying these models to interactive retrieval, we aim at modeling search progress at a level that is comparable to cognitive models. This allows for user-oriented analysis of interactive IR, for user guidance and for stochastic simulations of interactive IR. As a second application domain, we regard social media interaction, focusing on rumor detection and veracity in Twitter streams. Experimental results from both domains show that these approaches are well-suited for dealing with real-world data.
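
The core computation behind such hidden-state models is the forward recursion for the likelihood of an observed signal sequence. The sketch below is a generic HMM fragment; the state and signal sets are invented for illustration and are not the models from the talk.

```python
import numpy as np

# toy model: 2 hidden search states, 3 observable interaction signals (assumption)
pi = np.array([0.7, 0.3])            # initial state distribution
A = np.array([[0.8, 0.2],            # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.6, 0.3, 0.1],       # emission probabilities per state
              [0.1, 0.4, 0.5]])

def likelihood(obs):
    """P(observation sequence) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, weight by emission
    return alpha.sum()

print(likelihood([0, 1, 2]))  # e.g. query -> click -> reformulate
```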

Bio

>Norbert Fuhr holds a PhD (Dr.) in Computer Science from the Technical University of Darmstadt, which he received in 1986. He became Associate Professor in the computer science department of the University of Dortmund in 1991 and was appointed Full Professor for computer science at the University of Duisburg-Essen in 2002. His past research dealt with topics such as probabilistic retrieval models, the integration of IR and databases, retrieval in distributed digital libraries and XML documents, and user-friendly retrieval interfaces. His current research interests are models for interactive retrieval, user-oriented retrieval methods, and social media retrieval. Norbert Fuhr has served as PC member and program chair of major conferences in IR and digital libraries, and on the editorial boards of several journals in these areas. In 2012, Norbert Fuhr received the Gerard Salton Award of ACM SIGIR.

Host

>Mário Jorge Costa Gaspar da Silva


Andrew Myers

Cornell University, USA

Mixing Consistency in Geodistributed Transactions

15 January 2019

14:30, Anfiteatro VA4, floor -1 of the Civil Engineering building – IST/Alameda

Abstract

>Programming concurrent, distributed systems that mutate shared, persistent, geo-replicated state is hard. To enable high availability and scalability, a new class of weakly consistent data stores has become popular. However, some data needs strong consistency. We introduce mixed-consistency transactions, embodied in a new embedded language, MixT. Programmers explicitly associate consistency models with remote storage sites; within each atomic, isolated transaction, data can be accessed with a mixture of different consistency models. Compile-time information-flow checking, applied to consistency models, ensures that these models are mixed safely and enables the compiler to automatically partition transactions into a single sub-transaction per consistency model. New run-time mechanisms ensure that consistency models can also be mixed safely, even when the data used by a transaction resides on separate, mutually unaware stores. Performance measurements show that despite offering strong guarantees, mixed-consistency transactions can significantly outperform traditional serializable transactions.
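
The information-flow idea can be made concrete with a small sketch. This is not MixT's actual syntax or semantics (MixT is a language embedded in C++); it is a hypothetical Python rendering, with an assumed two-level lattice, of the rule that weakly consistent data must never influence strongly consistent state.

```python
LEVELS = {"causal": 0, "serializable": 1}   # assumed consistency lattice

class Ref:
    """A value stored at a remote site with a declared consistency model."""
    def __init__(self, name, level, value=0):
        self.name, self.level, self.value = name, level, value

def check_transaction(ops):
    """ops: list of (destination Ref, [source Refs it depends on]).
    Enforce the flow rule: a write may depend only on data whose
    consistency is at least as strong as the written location's."""
    for dst, srcs in ops:
        for src in srcs:
            if LEVELS[src.level] < LEVELS[dst.level]:
                raise TypeError(f"unsafe flow: {src.name} ({src.level}) "
                                f"-> {dst.name} ({dst.level})")
    # a safe transaction can then be split into one sub-transaction per level
    return sorted({dst.level for dst, _ in ops}, key=LEVELS.get, reverse=True)

balance = Ref("balance", "serializable", 100)
counter = Ref("counter", "causal")
check_transaction([(counter, [balance])])     # ok: strong may flow to weak
try:
    check_transaction([(balance, [counter])])
except TypeError as e:
    print(e)                                  # rejected: weak -> strong
```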

Bio

>Andrew Myers is a Professor in the Department of Computer Science at Cornell University in Ithaca, NY. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1999, advised by Barbara Liskov. His research interests include computer security, programming languages, and distributed and persistent programming systems. His work on computer security has focused on practical, sound, expressive languages and systems for enforcing information security. The Jif programming language makes it possible to write programs which the compiler ensures are secure, and the Fabric system extends this approach to distributed programming. The Polyglot extensible compiler framework has been widely used for programming language research. Myers is an ACM Fellow. He has received awards for papers appearing in POPL'99, SOSP'01, SOSP'07, CIDR'13, PLDI'13, and PLDI'15. Myers is the current Editor-in-Chief of ACM Transactions on Programming Languages and Systems (TOPLAS) and past co-EiC of the Journal of Computer Security. He has also served as program chair or co-chair for several conferences: ACM POPL 2018, ACM CCS 2016, POST 2014, IEEE CSF 2010, and IEEE S&P 2009.

Host

>Rodrigo Seromenho Miragaia Rodrigues


Giancarlo Guizzardi

Free University of Bolzano-Bozen, Italy

Conceptual Models as Ontological Contracts

2 July 2018

11:00, IST Alameda: room 0.17 – building Informática II (videoconference with IST TagusPark: room 0.19)

Abstract

>In the years to come, we will experience an increasing demand for building Reference Conceptual Models in critical domains in reality, as well as employing them to address classes of problems for which sophisticated conceptual distinctions are demanded. One of these key problems is Semantic Interoperability. Effective semantic interoperability requires an alignment between worldviews or, to put it more accurately, it requires the precise understanding of the relation between the (inevitable) ontological commitments assumed by different representations and the systems based on them (including sociotechnical systems). In this talk, I argue that, in this scenario, Reference Conceptual Models should be seen as Ontological Contracts, i.e., as precise descriptions that explicitly represent the Ontological Commitments of a collective of stakeholders sharing a certain worldview. I then elaborate on a number of theoretical, methodological and computational tools required for building these meaning contracts. Firstly, I discuss the importance of Formal Ontology in the philosophical sense and, in particular, I elaborate on the role of foundational axiomatic theories and principles in the design of conceptual modeling languages and methodologies. Secondly, I discuss the role played by three types of complexity management tools that are derived from these foundational theories, namely: Ontological Design Patterns (ODPs) as methodological mechanisms for encoding these ontological theories; Ontology Pattern Languages (OPLs) as systems of representation that take ODPs as higher-granularity modeling primitives; and Ontological Anti-Patterns (OAPs) as structures that can be used to systematically identify possible deviations between the set of valid states of affairs admitted by a model (the actual ontological commitment) and the set of states of affairs actually intended by the stakeholders (the intended ontological commitment). Finally, I illustrate the role played by a particular type of computer-based visual simulation approach in the validation of these reference models as well as for anti-pattern elicitation and rectification.

Bio

>Giancarlo Guizzardi has a PhD (with the highest distinction) from the University of Twente, The Netherlands. He is currently a Professor of Computer Science at the Free University of Bolzano-Bozen, Italy, where he leads the Conceptual and Cognitive Modeling Research Group (CORE). He is also a founder and senior member of the Ontology and Conceptual Modeling Research Group (NEMO), in Brazil. Two well-known results associated with his research program are: the ontologically well-founded version of UML termed OntoUML, which has been adopted by many research, industrial and government institutions worldwide; and the foundational ontology UFO (Unified Foundational Ontology), which has influenced international standardization activities in areas such as Software Engineering and Enterprise Architecture (e.g., the ArchiMate Standard). He has been active for more than two decades in the areas of Ontologies, Conceptual Modeling and Enterprise Semantics. Over the years, he has conducted many technology transfer projects in large organizations in sectors such as Telecommunications, Software Engineering, Digital Advertisement, Product Recommendation, Digital Journalism, Complex Media Management, Energy, among others. Moreover, he has authored more than 220 peer-reviewed publications in the aforementioned areas, which have received more than 12 paper awards. He has also played key roles in international conferences such as general chair (e.g., FOIS), program chair (e.g., ER, FOIS, IEEE EDOC, EEWC) and keynote speaker (e.g., ER, BPM, BIR), as well as in international journals such as associate editor (Applied Ontology) and member of editorial boards (e.g., Requirements Engineering Journal, Semantic Web Journal). Finally, he has been a member of the executive council and is currently a member of the Advisory Board of the International Association for Ontology and its Applications (IAOA).

Host

>José Luis Brinquete Borbinha


Prof. K. J. Ray Liu

Electrical and Computer Engineering Department, University of Maryland, College Park

Radio Analytics: The Future Platform for Wireless Positioning, Tracking and Sensing

11 May 2018

11:00, Anfiteatro Abreu Faro, IST

Abstract

>What smart impact will future 5G and IoT bring to our lives? Many may wonder, and even speculate, but do we really know? With more and more bandwidth readily available for the next generation of wireless applications, many more smart applications/services unimaginable today may be possible. In this talk, we will show that with more bandwidth, one can see many multipaths, which can serve as hundreds of virtual antennas that can be leveraged as new degrees of freedom for smart life. Together with the fundamental physical principle of time reversal to focus energy at specific positions, and with the use of machine learning, a revolutionary radio analytics platform can be built to enable many cutting-edge IoT applications that have been envisioned for a long time but never achieved. We will show the world's first centimeter-accuracy wireless indoor positioning system, which can offer indoor GPS-like capability to track humans or any indoor objects without any infrastructure, as long as WiFi or LTE is available. Such a technology forms the core of a smart radios platform that can be applied to home/office monitoring/security, radio human biometrics, vital signs detection, wireless charging, and 5G communications. In essence, in the future wireless world, communication as we see it will be just a small component of what's possible. There are many more magic-like smart applications that can be made possible, allowing us to decipher our surrounding world with a new "sixth sense". Some demo videos will be shown to illustrate the future of smart radios for smart life.
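
The time-reversal principle mentioned above has a compact signal-processing core: pre-filtering with the conjugate time-reversed channel response turns the end-to-end channel into its autocorrelation, which peaks sharply at one delay/location. The toy sketch below uses a synthetic random channel of our own invention, purely to show the focusing effect.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy multipath channel impulse response: many taps act as "virtual antennas"
h = rng.normal(size=64) + 1j * rng.normal(size=64)

g = np.conj(h[::-1])           # time-reversed conjugate pre-filter
y = np.convolve(g, h)          # effective end-to-end response

peak = np.abs(y).max()                                   # sum of |h|^2
sidelobe = np.abs(np.delete(y, np.argmax(np.abs(y)))).max()
print(peak / sidelobe)         # energy focuses sharply at the matched tap
assert np.argmax(np.abs(y)) == len(h) - 1
```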

Bio

>Dr. K. J. Ray Liu was named a Distinguished Scholar-Teacher of the University of Maryland, College Park, in 2007, where he is Christine Kim Eminent Professor of Information Technology. He is the founder of Origin Wireless, Inc., a high-tech start-up developing smart radios for smart life. Dr. Liu was a recipient of the 2016 IEEE Leon K. Kirchmayer Award on graduate teaching and mentoring, the IEEE Signal Processing Society 2014 Society Award for "influential technical contributions and profound leadership impact", the IEEE Signal Processing Society 2009 Technical Achievement Award, and more than a dozen best paper awards. Recognized by Web of Science as a Highly Cited Researcher, he is a Fellow of the IEEE and AAAS. Dr. Liu is IEEE Vice President-Elect for Technical Activities. He was Division IX Director of the IEEE Board of Directors and President of the IEEE Signal Processing Society, where he also served as Vice President – Publications and Editor-in-Chief of the IEEE Signal Processing Magazine. He has also received teaching and research recognitions from the University of Maryland, including the university-level Invention of the Year Award (three times) and the college-level Poole and Kent Senior Faculty Teaching Award, Outstanding Faculty Research Award, and Outstanding Faculty Service Award, all from the A. James Clark School of Engineering (each award honors one faculty member per year from the entire college). (http://www.cspl.umd.edu/kjrliu/)

Host

>Isabel Maria Martins Trancoso


Onur Mutlu

ETH Zurich

Rethinking Memory System Design (and the Computing Platforms We Design Around It)

4 December 2017

11:00, IST - anfiteatro FA3

Abstract

>The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM and flash technologies are experiencing difficult technology scaling challenges that make the maintenance and enhancement of their capacity, energy efficiency, and reliability significantly more costly with conventional techniques. In fact, recent reliability issues with DRAM, such as the RowHammer problem, are already threatening system security and predictability. In this talk, we first discuss major challenges facing modern memory systems in the presence of greatly increasing demand for data and its fast analysis. We then examine some promising research and design directions to overcome these challenges. We discuss three key solution directions: 1) enabling new memory architectures, functions, and interfaces via more memory-centric system design, 2) enabling emerging non-volatile memory (NVM) technologies via hybrid and persistent memory systems, 3) enabling predictable memory systems via QoS-aware memory system design. If time permits, we will also discuss research challenges and opportunities in NAND flash memories.

Bio

>Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the William D. and Nancy W. Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, and bioinformatics. He is especially interested in interactions across domains and between applications, system software, compilers, and microarchitecture, with a major current focus on memory and storage systems. A variety of techniques he and his group have invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. His industrial experience spans starting the Computer Architecture Group at Microsoft Research (2006-2009), and various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, faculty partnership awards from various companies, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems and architecture venues. His computer architecture course lectures and materials are freely available on YouTube, and his research group makes software artifacts freely available online. For more information, please see his webpage at http://people.inf.ethz.ch/omutlu/. (http://people.inf.ethz.ch/omutlu/)

Host

>Leonel Augusto Pires Seabra de Sousa


Kurt Mehlhorn

Max-Planck-Institute for Informatics

Certifying Computations: Algorithmics meets Software Engineering

23 October 2017

10:30

Abstract

>I am mostly interested in algorithms for difficult combinatorial and geometric problems: What is the fastest tour from A to B? How do we optimally assign jobs to machines? How can a robot move from one location to another? Algorithms solving such problems are complex and their implementation is error-prone. How can we make sure that our implementations of such algorithms are reliable? Certifying algorithms are a viable approach towards this goal. Consider the I/O behavior of a conventional program for computing a function f: the user feeds an input x to the program and the program returns an output y. Why should the user believe that y is equal to f(x)? A certifying algorithm for f computes y together with a witness (proof) w; w proves that the algorithm has not erred for this particular input. The certifying algorithm is accompanied by a checker program C, which accepts the triple (x, y, w) if and only if w is a valid witness for the equality y = f(x). Certifying algorithms are the design principle of LEDA, the library of efficient data types and algorithms ([MN99]). In the first part of the talk, we introduce the concept of certifying algorithms and discuss its significance. In the second part, we survey certifying algorithms ([MMNS11]). In the third part, we discuss the formal verification of certifying computations ([ABMR14, NRM14]).
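
A textbook instance of the idea (our example, not one from the talk) is bipartiteness testing: the algorithm returns either a 2-colouring or an odd cycle, and either answer comes with a witness that a tiny, independent checker can verify.

```python
from collections import deque

def certify_bipartite(adj):
    """Return (True, colouring) or (False, odd_cycle) for an undirected graph."""
    colour, parent = {}, {}
    for s in adj:
        if s in colour:
            continue
        colour[s], parent[s] = 0, None
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v], parent[v] = 1 - colour[u], u
                    queue.append(v)
                elif colour[v] == colour[u]:      # clash: extract odd cycle
                    def root_path(x):
                        path = []
                        while x is not None:
                            path.append(x)
                            x = parent[x]
                        return path
                    pu, pv = root_path(u), root_path(v)
                    # strip the common suffix down to the lowest common ancestor
                    while len(pu) > 1 and len(pv) > 1 and pu[-2] == pv[-2]:
                        pu.pop(); pv.pop()
                    return False, pu + pv[-2::-1]  # u..lca..v, closed by edge (v,u)
    return True, colour

def check(adj, answer, witness):
    """The independent checker C for the triple (x, y, w)."""
    if answer:  # witness: a 2-colouring, every edge must be bicoloured
        return all(witness[u] != witness[v] for u in adj for v in adj[u])
    cycle = witness  # witness: a closed walk of odd length along real edges
    closed = all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
                 for i in range(len(cycle)))
    return closed and len(cycle) % 2 == 1

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
ok, w = certify_bipartite(triangle)
assert not ok and check(triangle, ok, w)   # w is an odd cycle, e.g. [2, 1, 3]
```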

Bio

>Kurt Mehlhorn heads the department "Algorithms and Complexity" at the Max Planck Institute for Informatics in Saarbrücken. He is author or co-author of approximately 300 research papers and has published six books. 80 of his former students now hold professorships. Mehlhorn has been awarded a number of prestigious research prizes, including the Leibniz Prize of the German Research Foundation in 1986 and the Konrad Zuse Medal in 1995. He was Vice President of the Max Planck Society from 2002 to 2008. In 2014, he celebrated his 30th anniversary as a university teacher. He is a member of the Leopoldina, the German National Academy of Sciences; of acatech, the National Academy of Science and Engineering; of the Academia Europaea; and of the Indian National Academy of Engineering. Last year, members of the US National Academy of Sciences appointed him as a "Foreign Associate" in their ranks; he is the second European computer scientist to be honored this way. In 1995 he co-founded Algorithmic Solutions Software GmbH.

Host

>Rodrigo Seromenho Miragaia Rodrigues


Prof. Don Norman

University of California

People-Centered Design: Why It Matters

7 July 2017

14:30, IST - Centro de Congressos

Abstract

>At the new Design Lab at UC San Diego, design is a way of thinking: understanding people's real, fundamental needs, and designing systems that fulfill those needs in an understandable, enjoyable manner. Does it matter? Yes. Medical error is the third largest cause of death (after cancer and heart disease), and most of this error is caused by poor design of instruments, devices, and procedures. Autonomous cars promise to save lives, but how do pedestrians interact with cars that have no drivers? In this lecture, I describe some of the problems we are studying, including healthcare and autonomous automobiles, showing how we approach these issues. We practice a philosophy of people-centered design where we start with observation, then progress to deep analysis of the underlying issues, to rapid prototypes (in hours), testing, and continual iteration. Engineers and computer scientists need to understand these principles. Engineers often make the mistake of being far too logical. What do I mean? Come to the discussion.

Bio

>Don Norman is Founder and Director of the Design Lab at the University of California, San Diego. He was co-founder and first chair of the Cognitive Science Department and, prior to that, chair of Psychology. He has been a Vice President of Advanced Technology at Apple and an executive at HP. He is co-founder and principal of the Nielsen Norman Group, a member of the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences, the Cognitive Science Society, the ACM, the Human Factors and Ergonomics Society, and the Design Research Society. He is an IDEO fellow and a trustee of IIT's Institute of Design. He serves on company boards, has honorary degrees from Delft, Padua, and San Marino, the lifetime achievement award from ACM's Computer-Human Interaction group, and the President's lifetime achievement award from the Human Factors and Ergonomics Society. He has published 20 books, translated into 20 languages, including Emotional Design and The Design of Everyday Things. He can be found at www.jnd.org (https://www.jnd.org/)

Host

>Rodrigo Seromenho Miragaia Rodrigues


Prof. Dennis Shasha

Courant Institute of New York University

VersionClimber: an algorithm and system for package evolution in data science

30 June 2017

11:00, IST - anfiteatro VA1

Abstract

>Imagine you are a data scientist (as many of us are or have become). Systems you build typically require many data sources and many packages (machine learning/data mining, data management, and visualization) to run. Your working configuration will consist of a set of packages, each at a particular version. You want to update some packages (software or data) to the most recent versions possible, but you want your system to run after the upgrades, thus perhaps entailing changes to the versions of other packages. One approach is to hope the latest versions of all packages work. If that fails, the fallback is manual trial and error, but that quickly ends in frustration. We advocate a provenance-style approach in which tools like ptrace enable us to identify the version combinations of different packages, while tools like pip, GitHub, and VirtualEnv enable us to fetch particular versions of packages and try them in a sandbox-like environment. Because the space of versions to explore grows exponentially with the number of packages, we have developed a memoizing algorithm that avoids exponential search while still finding an optimum version combination. Heuristics combined with certain empirical facts about packages (e.g. local upward compatibility) improve performance further still. We present experimental results on well-known packages used in data science to illustrate the effectiveness of our approach.
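
A hedged sketch of the search idea follows: it memoizes the outcome of every tested configuration so no combination is built twice, and explores downgrades breadth-first from the all-newest configuration. This is a toy illustration of the memoization point, not the published VersionClimber algorithm; `works` stands in for the sandboxed build-and-test oracle.

```python
from collections import deque

def find_config(versions, works):
    """versions: dict package -> list of versions, newest first (assumption).
    works(config): sandboxed test of a frozen configuration (assumed oracle).
    Returns the first working configuration, preferring newer versions."""
    names = sorted(versions)
    start = tuple(versions[n][0] for n in names)   # all-newest configuration
    tested = {}                                    # memo: config -> bool

    def test(config):
        if config not in tested:                   # never build/test a combo twice
            tested[config] = works(dict(zip(names, config)))
        return tested[config]

    queue = deque([start])
    while queue:
        config = queue.popleft()
        if config in tested:                       # reached via another downgrade order
            continue
        if test(config):
            return dict(zip(names, config))
        for i, name in enumerate(names):           # enqueue one-step downgrades
            avail = versions[name]
            j = avail.index(config[i])
            if j + 1 < len(avail):
                queue.append(config[:i] + (avail[j + 1],) + config[i + 1:])
    return None
```

Used as, say, `find_config({"numpy": ["1.9", "1.8"], "scipy": ["0.16", "0.15"]}, works)` with a hypothetical `works` oracle; the memo table is what keeps repeated downgrade orders from exploding into exponential re-testing.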

Bio

>Dennis Shasha is a professor of computer science at the Courant Institute of New York University and an Associate Director of NYU Wireless. He works with biologists on pattern discovery for network inference; with computational chemists on algorithms for protein design; with physicists and financial people on algorithms for time series; on clocked computation for DNA computing; and on computational reproducibility. Other areas of interest include database tuning as well as tree and graph matching. Because he likes to type, he has written six books of puzzles about a mathematical detective named Dr. Ecco, a biography about great computer scientists, and a book about the future of computing. He has also written five technical books about database tuning, biological pattern recognition, time series, DNA computing, resampling statistics, and causal inference in molecular networks. He has co-authored over eighty journal papers, seventy conference papers, and twenty-five patents. He has written the puzzle column for various publications including Scientific American, Dr. Dobb's Journal, and the Communications of the ACM. He is a fellow of the ACM and an INRIA International Chair. (http://www.cs.nyu.edu/shasha/)

Host

>Helena Isabel de Jesus Galhardas


Prof. Carla Gomes

Cornell University

Computational Sustainability: Computing for a Better World

9 June 2017

14:00, Anfiteatro Abreu Faro, Instituto Superior Técnico

Abstract

>Computational sustainability is a new interdisciplinary research field with the overarching goal of developing computational models, methods, and tools to help manage the balance between environmental, economic, and societal needs for a sustainable future. I will provide an overview of computational sustainability, with examples ranging from wildlife conservation and biodiversity, to poverty mitigation, to materials discovery for renewable energy. I will also highlight cross-cutting computational themes and challenges in Artificial Intelligence to address sustainability problems.

Bio

>Carla Gomes is a Professor of Computer Science and the director of the Institute for Computational Sustainability at Cornell University. Gomes obtained a Ph.D. in computer science in the area of artificial intelligence and operations research from the University of Edinburgh and an M.Sc. from the University of Lisbon. Her research area is Artificial Intelligence with a focus on large-scale constraint reasoning, optimization, and machine learning. Recently, Gomes has become deeply immersed in research in the new field of Computational Sustainability. From 2007 to 2013 Gomes led an NSF Expeditions-in-Computing grant in Computational Sustainability that nucleated the new field. Gomes is currently the lead PI of a new NSF Expeditions-in-Computing grant that established CompSustNet, a large-scale national and international research network, to further expand the field of Computational Sustainability. Gomes is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a Fellow of the American Association for the Advancement of Science (AAAS). (http://www.cs.cornell.edu/gomes/)

Host

>Maria Inês Camarate de Campos Lynce de Faria


Prof. Marija Ilic

Carnegie Mellon University

Toward a Unified Approach to Sustainable and Resilient Electric Energy Systems - Modeling, Control and Testbeds

23 September 2016

11:00, IST - Auditório Abreu Faro

Abstract

>In this talk we present the changing objectives of electric energy systems viewed as complex dynamical systems. We first briefly sketch the current landscape of the industry. We then take a broader look at the objectives of deploying cyber technology into these physical systems, in light of viewing them as general socio-ecological systems (SES). We highlight that in the approach taken by the late Elinor Ostrom to assessing the sustainability of a complex SES, a key metric concerns the interactions between different system members. Motivated by her work, we discuss the modeling of dynamical interactions within a physical electric power system. We highlight that the existence of such physical interaction variables can be proven from the most general conservation laws between any component and the rest of the system. This becomes the basis for proposing a transformed state space for general modeling of electric energy systems: the lowest component level is modeled in terms of technology-specific physical variables, and local control is designed so that the interaction variable dynamics with the rest of the system are as specified. The higher-level model captures the dynamics of the interactions within an interconnected system and does not need to know the details of the internal variables of individual (groups of) components. Once this is understood, it becomes straightforward to define what must be exchanged in electricity markets, and one can interpret distributed bidding and market clearing using this higher-level model only. In this sense electricity markets could and should become technology agnostic. Similarly, it becomes possible to design protocols/standards for cyber design to enable robust/resilient system operation over broad ranges of operating conditions and equipment status. In closing, our Smart Grid in a Room Simulator (SGRS), under development at CMU in collaboration with NIST, is fundamentally based on this multi-layered modeling. As such, it sets the basis for simulating electricity markets and their effects on physical system response. We have used it for several years now to demonstrate novel control concepts introduced by several doctoral students at CMU.

Bio

>Dr. Marija Ilić holds a joint appointment at Carnegie Mellon as Professor of Electrical & Computer Engineering and Engineering & Public Policy, where she has been a tenured faculty member since October 2002. Her principal fields of interest include electric power systems modeling; design of monitoring, control, and pricing algorithms for electric power systems; normal and emergency control of electric power systems; control of large-scale dynamic systems; nonlinear network and systems theory; and modeling and control of economic and technical interactions in dynamical systems, with applications to competitive energy systems. Dr. Ilić received her M.Sc. and D.Sc. degrees in Systems Science and Mathematics from Washington University in St. Louis and earned her MEE and Dipl. Ing. at the University of Belgrade. She is an IEEE Fellow and an IEEE Distinguished Lecturer, as well as a recipient of the First Presidential Young Investigator Award for Power Systems. In addition to her academic work, Dr. Ilić is a consultant for the electric power industry and the founder of New Electricity Transmission Software Solutions, Inc. (NETSS, Inc.). From September 1999 until March 2001, Dr. Ilić was a Program Director for Control, Networks and Computational Intelligence at the National Science Foundation. Prior to her arrival at Carnegie Mellon, Dr. Ilić held the positions of Visiting Associate Professor and Senior Research Scientist at the Massachusetts Institute of Technology. From 1986 to 1989, she was a tenured faculty member at the University of Illinois at Urbana-Champaign, where she had taught since 1984. She has also taught at Cornell and Drexel, worked as a visiting researcher at General Electric, and served as a principal research engineer in Belgrade. Dr. Ilić has co-authored several books on the subject of large-scale electric power systems: Ilić and Zaborszky, Dynamics and Control of Large Electric Power Systems (John Wiley & Sons, 2000); Ilić, Galiana, and Fink (eds.), Power Systems Restructuring: Engineering and Economics (Kluwer Academic Publishers, 2nd printing, 2000); Allen and Ilić, Price-Based Commitment Decisions in the Electricity Markets (Springer-Verlag London, 1999); and Ilić and Liu, Hierarchical Power Systems Control: Its Value in a Changing Industry (Springer-Verlag London, 1996). She has also served as an associate editor of the multi-volume Encyclopedia of Energy (Cutler J. Cleveland (ed.), Elsevier, 2004) and as a co-editor of Control and Optimization Methods in Smart Grids (Springer, 2011). Her most recent book, Engineering IT-Enabled Sustainable Electricity Services: The Tale of Two Low-Cost Green Azores Islands, appeared from Springer in August 2013. Recently, Professor Ilić developed and taught a course in Electric Energy Processing and co-developed/co-taught a course entitled "Electric Power Systems Reading Seminar" (see http://www.ece.cmu.edu/~nsf-education). She is the PI of a major NSF ITR award (see http://www.ece.cmu.edu/~nsf-itr) and the co-PI of an interdisciplinary DOE grant entitled "Bundling Energy Systems of the Future". She has co-organized an annual multidisciplinary Electricity Industry conference series at Carnegie Mellon (ECE, EPP, and Tepper) with participants from academia, government, and industry; the conference looks forward to its ninth year in 2013 (see http://www.ece.cmu.edu/~electricityconference). Dr. Ilić is the Director of the Electric Energy Systems Group at Carnegie Mellon (http://www.eesg.ece.cmu.edu), Director of the SRC ERI Smart Grid Research Center (http://www.src.org/program/eri/sgrc/), and the Honorary Chaired Professor for Control of Future Electricity Network Operations at Delft University of Technology, the Netherlands. Professor Ilić received the Phillip L. Dowd Fellowship Teaching Award in 2010 and the Steven J. Fenves Award for Systems Research in 2012 from the Carnegie Institute of Technology at Carnegie Mellon University. (https://users.ece.cmu.edu/~milic/)

Host

>Rodrigo Seromenho Miragaia Rodrigues


Prof. Pedro Domingos

University of Washington

The Five Tribes of Machine Learning, and What You Can Take from Each

27 June 2016

14:30, IST Alameda, Anfiteatro Abreu Faro

Abstract

>There are five main schools of thought in machine learning, and each has its own master algorithm – a general-purpose learner that can in principle be applied to any domain. The symbolists have inverse deduction, the connectionists have backpropagation, the evolutionaries have genetic programming, the Bayesians have probabilistic inference, and the analogizers have support vector machines. What we really need, however, is a single algorithm combining the key features of all of them. In this talk I will describe my work toward this goal, including in particular Markov logic networks, and speculate on the new applications that such a universal learner will enable, and how society will change as a result.

Bio

>Pedro Domingos is a professor of computer science at the University of Washington and the author of "The Master Algorithm". He is a winner of the SIGKDD Innovation Award, the highest honor in data science. He is a Fellow of the Association for the Advancement of Artificial Intelligence, and has received a Fulbright Scholarship, a Sloan Fellowship, the National Science Foundation’s CAREER Award, and numerous best paper awards. He received his Ph.D. from the University of California at Irvine and is the author or co-author of over 200 technical publications. He has held visiting positions at Stanford, Carnegie Mellon, and MIT. He co-founded the International Machine Learning Society in 2001. His research spans a wide variety of topics in machine learning, artificial intelligence, and data science, including scaling learning algorithms to big data, maximizing word of mouth in social networks, unifying logic and probability, and deep learning. (http://homes.cs.washington.edu/~pedrod/)

Host

>Maria Inês Camarate de Campos Lynce de Faria


Prof. Srini Devadas

MIT

Tardis: Time Traveling Coherence Algorithm for Distributed Shared Memory

6 April 2016

15:00, IST Alameda, Anfiteatro VA4

Abstract

>(Work done with Xiangyao Yu.) A new memory coherence protocol, Tardis, is presented. Tardis uses timestamp counters representing logical, as opposed to physical, time to order memory operations and enforce sequential consistency in any type of shared memory system. Tardis is unique in that, compared to the widely adopted directory coherence protocol and its variants, it completely avoids multicasting and only requires O(log N) storage per cache block for an N-core system, rather than O(N) sharer information. Tardis is simpler and easier to reason about, yet achieves performance similar to directory protocols on a wide range of benchmarks run on 16, 64 and 256 cores.
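
To give a flavour of logical-time coherence, here is a deliberately tiny sketch of the lease idea: reads take logical leases on a value, and writes jump past all leases instead of broadcasting invalidations. This is our toy rendering (single line, invented field names), not the full Tardis protocol.

```python
class Line:
    """One memory word with a logical write timestamp and a read lease."""
    def __init__(self, value=0):
        self.value, self.wts, self.rts = value, 0, 0

class Core:
    def __init__(self):
        self.pts = 0            # the core's logical program timestamp

LEASE = 10

def read(core, line):
    core.pts = max(core.pts, line.wts)           # cannot read "before" the write
    line.rts = max(line.rts, core.pts + LEASE)   # reserve the value logically
    return line.value

def write(core, line, value):
    core.pts = max(core.pts, line.rts + 1)       # jump past every leased read
    line.wts = line.rts = core.pts               # no invalidation multicast
    line.value = value

# two cores ordered purely in logical time, never by broadcast:
a, b, x = Core(), Core(), Line()
read(a, x)           # a leases x up to logical time 10
write(b, x, 42)      # b's write is logically ordered after a's read, at time 11
assert (b.pts, x.value) == (11, 42)
```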

Bio

>Srini Devadas is the Webster Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), where he has been on the faculty since 1988. He served as Associate Head of the Department of Electrical Engineering and Computer Science, with responsibility for Computer Science, from 2005 to 2011. Devadas's research interests span Computer-Aided Design (CAD), computer security, and computer architecture, and he has received significant awards from each discipline. He is a Fellow of the ACM and IEEE. He is a MacVicar Faculty Fellow at MIT, considered the institute's highest teaching honor. (https://people.csail.mit.edu/devadas/)

Host

>José Carlos Alves Pereira Monteiro


Prof. Mike Hinchey

Lero – the Irish Software Research Centre, University of Limerick, Ireland

Evolving Critical Systems

9 December 2015

14:00, IST Alameda, anfiteatro VA1

Abstract

>Increasingly, software can be considered critical, due to the business or other functionality it supports. Upgrades or changes to such software are expensive and risky, primarily because the software has not been designed and built for ease of change. Expertise, tools and methodologies which support the design and implementation of software systems that evolve without risk (of failure or loss of quality) are essential. We address a research agenda for building software in computer-based systems that (a) is highly reliable and (b) retains this reliability as it evolves, either over time or at run-time, and we illustrate this with a complex example from the domain of space exploration.

Bio

>Mike Hinchey is Director of Lero—the Irish Software Research Centre and Professor of Software Engineering at University of Limerick, Ireland. Prior to joining Lero, Professor Hinchey was Director of the NASA Software Engineering Laboratory. In 2009 he was awarded NASA's Kerley Award as Innovator of the Year. Hinchey holds a B.Sc. in Computer Systems from University of Limerick, an M.Sc. in Computation from University of Oxford and a PhD in Computer Science from University of Cambridge. The author/editor of more than 15 books and over 200 articles on various aspects of Software Engineering, Hinchey has at various times held positions as Full Professor in Australia, the UK, Sweden and the USA. He is a Chartered Engineer, Chartered Engineering Professional, Chartered Mathematician and Chartered Information Technology Professional, as well as a Fellow of the IET, the British Computer Society and the Irish Computer Society. He is President-Elect of IFIP (International Federation for Information Processing) and will serve as its President from 2016 to 2019. (http://www.lero.ie/people/directors)

Host

>Maria Inês Camarate de Campos Lynce de Faria


Prof. Frank McSherry

ETH Zurich

Next-generation data-parallel dataflow systems

28 September 2015

11:00, IST Alameda, anfiteatro EA3

Abstract

>The Naiad project at Microsoft Research introduced a new model of dataflow computation, timely dataflow, which was designed to support low-latency computation in data-parallel dataflow graphs containing structured cycles. This model substantially enlarged the space of data-parallel computations that can be reasonably expressed, as compared to other modern "big data" systems. Naiad achieved excellent performance in its intended application domains, largely by providing the dataflow operators with meaningful and low-overhead coordination primitives, but otherwise staying out of their way. In this talk we will discuss performance issues with existing systems, review timely dataflow, and present a new data-parallel design that coordinates less frequently yet more accurately. The design is largely implemented, written in 100% safe Rust and available at https://github.com/frankmcsherry/timely-dataflow, and currently out-performs several popular distributed systems even when run on the speaker's laptop. This talk reflects work done jointly with Derek Murray, Rebecca Isaacs, Michael Isard, Paul Barham, and Martin Abadi.

Bio

>Frank McSherry is currently visiting ETH Zurich, and was formerly affiliated with Microsoft Research, Silicon Valley. While there he led the Naiad project, which introduced both differential and timely dataflow, and remains one of the top-performing big data platforms. He also works with differential privacy, due in part to its interesting relationship to data-parallel computation. Frank currently enjoys spending his time in places other than Silicon Valley. (http://www.frankmcsherry.org)

Host

>Rodrigo Seromenho Miragaia Rodrigues


Prof. Jorge M. Pacheco

Universidade do Minho

More is different: how complex networks lead to new emergent social behavior

4 June 2015

14:00, IST, anfiteatro EA3

Abstract

>The holy grail of computational social science is to understand how societies behave as a collective, knowing how individuals interact with each other. Conversely, if all we know is how societies behave collectively (as happens all too often in microbiology), is there anything we can say about how individuals interact with each other? In this talk I will describe how to use massive multi-agent computer simulations to establish a reversible link between individual and collective behavior in large communities. The individual behavior of agents will be modeled by means of a well-defined social dilemma of cooperation. Agents will interact along the links of a complex, adaptive social network that co-evolves with individual behavior. I will show that adaptive social networks act to change the dilemma that individuals engage in, revealing a very different behavior at the global level. The fact that this computational link between individual and collective behavior is reversible proves its usefulness across different disciplines of science.
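
As a minimal illustration of the kind of simulation described (our toy, not the talk's model): agents on a fixed graph play a donation-game prisoner's dilemma with their neighbours and imitate a better-scoring neighbour. All parameters below are invented for illustration, and the network here is static rather than adaptive.

```python
import random

random.seed(2)
N, B, C = 50, 3.0, 1.0                  # agents, benefit, cost (assumed values)
# ring network: each agent is linked to two neighbours
adj = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
coop = {i: random.random() < 0.5 for i in range(N)}

def payoff(i):
    """Donation game summed over neighbours: cooperators pay C per link,
    and every cooperating neighbour donates B."""
    total = 0.0
    for j in adj[i]:
        if coop[i]:
            total -= C
        if coop[j]:
            total += B
    return total

for _ in range(200):                    # imitation (social learning) dynamics
    i = random.randrange(N)
    j = random.choice(adj[i])
    if payoff(j) > payoff(i):           # copy the more successful neighbour
        coop[i] = coop[j]

print(sum(coop.values()) / N)           # final fraction of cooperators
```

In the adaptive-network setting of the talk, agents would additionally rewire links away from defectors, which is what reshapes the effective dilemma at the collective level.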

Bio

>Jorge Pacheco is Professor of Mathematics at the University of Minho and also a member of the Centre of Molecular and Environmental Biology at the same university. He graduated in Physics in Coimbra and received a PhD in Theoretical Physics from the Niels Bohr Institute in Copenhagen. He carries out research in a variety of topics, ranging from quantum many-body physics to the mathematical & computational description of evolutionary processes such as cancer, the evolution of cooperation, urban development & complexity, and complex networks. His interest in environmental problems led him to investigate, more recently, how one should optimize governance in connection with climate change, as well as how to reduce our carbon footprint when installing massively parallel computational infrastructures. A survey of publications can be found on Google Scholar: http://scholar.google.com/citations?user=3YDAC58AAAAJ&hl=en (https://sites.google.com/site/jorgempacheco/)

Host

>Francisco João Duarte Cordeiro Correia dos Santos


Prof. Pascal Felber

Université de Neuchâtel, Institut d'informatique

Privacy-Preserving Event Stream Processing in the Cloud

22 January 2015

10:00, IST, room EA1

Abstract

>Stream processing provides an appealing paradigm for building large-scale distributed applications. Such applications are often deployed over multiple administrative domains, some of which may not be trusted. Recent attacks in public clouds indicate that a major concern in untrusted domains is the enforcement of privacy. In this talk we will primarily focus on the problem of content-based routing (CBR), which is at the core of many event stream processing systems. By routing data based on subscriptions evaluated on the content of publications, CBR systems can expose critical information to unauthorized parties. Information leakage can be avoided by means of privacy-preserving filtering, which is supported by several mechanisms for encrypted matching. Unfortunately, existing approaches share a high performance overhead and the difficulty of applying classical optimizations. We will present and discuss mechanisms that greatly reduce the cost of supporting privacy-preserving filtering based on encrypted matching operators.
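
One way to make the setting concrete (a simplification of our own, far weaker than the encrypted-matching operators discussed in the talk): if publishers and subscribers share a key, equality predicates can be matched on keyed digests, so the broker never sees plaintext attribute values. All names below are invented.

```python
import hashlib
import hmac

KEY = b"shared-by-publishers-and-subscribers"   # the broker never holds this

def blind(attr, value):
    """Deterministic keyed digest: equal plaintexts yield equal digests."""
    msg = f"{attr}={value}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def broker_match(event, subscription):
    """The untrusted broker compares digests only (equality predicates)."""
    return all(event.get(a) == d for a, d in subscription.items())

event = {"symbol": blind("symbol", "AAPL"), "side": blind("side", "buy")}
sub = {"symbol": blind("symbol", "AAPL")}
assert broker_match(event, sub)
# caveat: deterministic blinding leaks equality patterns and cannot express
# range predicates, which is exactly why real encrypted matching schemes
# are more elaborate and more expensive.
```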

Bio

>Pascal Felber received his M.Sc. and Ph.D. degrees in Computer Science from the Swiss Federal Institute of Technology. From 1998 to 2002, he worked at Oracle Corporation and Bell Labs (Lucent Technologies) in the USA. From 2002 to 2004, he was an Assistant Professor at Institut EURECOM in France. Since October 2004, he has been a Professor of Computer Science at the University of Neuchâtel, Switzerland, working in the field of dependable and distributed systems. He has published over 100 research papers in various journals and conferences. (http://members.unine.ch/pascal.felber/index.html)

Host

>Paulo Jorge Pires Ferreira


Prof. Michael Wooldridge

University of Oxford

Folk Theorems for Multi-Agent Systems

28 October 2014

11:00, IST @Taguspark, room 0.65

Abstract

>The Nash Folk Theorems are a collection of related results that characterise the Nash equilibria that can be sustained in repeated games. As the name suggests, the Folk Theorems are technically simple, but this simplicity belies the fact that they are of enormous significance. For example, it has been argued that they provide answers to fundamental questions relating to the structure and behaviour of human societies. In this talk, I will introduce the Folk Theorems, and then show how they can be applied in the context of multi-agent systems, to understand the equilibria that can be obtained in such systems.
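
For reference, a textbook statement of the Nash folk theorem for infinitely repeated games with discounting (our addition, not a transcription from the talk) reads:

```latex
\textbf{Folk theorem (Nash version).} Let $G$ be a finite normal-form game and
let $\underline{v}_i = \min_{a_{-i}} \max_{a_i} u_i(a_i, a_{-i})$ denote player
$i$'s minmax payoff. For every feasible payoff profile $v = (v_1, \dots, v_n)$
with $v_i > \underline{v}_i$ for all $i$, there exists $\bar{\delta} < 1$ such
that for every discount factor $\delta \in (\bar{\delta}, 1)$ the infinitely
repeated game $G^{\infty}(\delta)$ has a Nash equilibrium whose average
discounted payoff profile is exactly $v$.
```

The technical simplicity the abstract mentions is visible here: the sustaining strategies are grim-trigger style (play toward $v$, and minmax any deviator forever), yet the set of equilibria they characterise is enormous.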

Bio

>I am a Professor of Computer Science in the Department of Computer Science at the University of Oxford, and a Senior Research Fellow at Hertford College. I joined Oxford on 1 June 2012; before this I was for twelve years a Professor of Computer Science at the University of Liverpool. In October 2011, I was awarded a 5-year ERC Advanced Grant, entitled "Reasoning About Computational Economies" (RACE). I am an AAAI Fellow, an ECCAI Fellow, an AISB Fellow, and a BCS Fellow. In 2006, I was the recipient of the ACM Autonomous Agents Research Award. In 1997, I founded AgentLink, the EC-funded European Network of Excellence in the area of agent-based computing. I was program chair for the 19th European Conference on Artificial Intelligence (ECAI-2010), held in Lisbon, Portugal, in August 2010. I will be General Chair for the 24th International Joint Conference on Artificial Intelligence (IJCAI-2015), to be held in Buenos Aires, Argentina. Between 2003 and 2009 I was co-editor-in-chief of the journal Autonomous Agents and Multi-Agent Systems. I was an associate editor of the Journal of Artificial Intelligence Research (JAIR) (2006-2009, 2009-2012) and of the Artificial Intelligence journal (2009-2012), and I serve on the editorial boards of the Journal of Applied Logic, Journal of Logic and Computation, Journal of Applied Artificial Intelligence, and Computational Intelligence. (http://www.cs.ox.ac.uk/people/michael.wooldridge/)

Host

>Ana Maria Severino de Almeida e Paiva

 
     
     
 

Prof. Shrikanth Narayanan

University of Southern California, USA

Behavioral Signal Processing: Enabling human-centered behavioral informatics

30 June 2014

14:30, IST, anfiteatro Abreu Faro

Abstract

>Audio-visual data have been a key enabler of human behavioral research and its applications. The confluence of sensing, communication and computing technologies is allowing capture of and access to data, in diverse forms and modalities, in ways that were unimaginable even a few years ago. Importantly, these data afford the analysis and interpretation of multimodal cues of verbal and non-verbal human behavior. These signals carry crucial information about not only a person’s intent and identity but also underlying attitudes and emotions. Automatically capturing these cues, although vastly challenging, offers the promise not just of efficient data processing but also of tools for discovery that enable hitherto unimagined insights. Recent computational approaches that have leveraged judicious use of both data and knowledge have yielded significant advances in this regard, for example in deriving rich, context-aware information from multimodal sources including human speech, language, and videos of behavior. This talk will focus on some of the advances and challenges in gathering such data and creating algorithms for machine processing of such cues. It will highlight some of our ongoing efforts in Behavioral Signal Processing (BSP), technology and algorithms for quantitatively and objectively understanding typical, atypical and distressed human behavior, with a specific focus on communicative, affective and social behavior. The talk will illustrate Behavioral Informatics applications of these techniques that contribute to quantifying higher-level, often subjectively described, human behavior in a domain-sensitive fashion. Examples will be drawn from health and well-being realms such as autism, couple therapy and addiction counseling.

Bio

>Shrikanth (Shri) Narayanan is Andrew J. Viterbi Professor of Engineering at the University of Southern California, where he is Professor of Electrical Engineering, Computer Science, Linguistics and Psychology and Director of the Ming Hsieh Institute. Prior to USC he was with AT&T Bell Labs and AT&T Research. His research focuses on human-centered information processing and communication technologies. He is a Fellow of the Acoustical Society of America, the IEEE, and the American Association for the Advancement of Science (AAAS). Shri Narayanan is an Editor for the Computer Speech and Language journal and an Associate Editor for the IEEE Transactions on Affective Computing, the Journal of the Acoustical Society of America and the APSIPA Transactions on Signal and Information Processing, having previously served as an Associate Editor for the IEEE Transactions on Speech and Audio Processing (2000-2004), the IEEE Signal Processing Magazine (2005-2008) and the IEEE Transactions on Multimedia (2008-2012). He is a recipient of several honors, including the 2005 and 2009 Best Transactions Paper awards from the IEEE Signal Processing Society, for which he also served as Distinguished Lecturer for 2010-11. With his students, he has received a number of best paper awards, including winning the Interspeech Challenges in 2009 (emotion classification), 2011 (speaker state classification), 2012 (speaker trait classification) and 2013 (paralinguistics/social signals). He has published over 600 papers and has been granted 16 U.S. patents. (http://sail.usc.edu/shri.php)

Host

>Isabel Maria Martins Trancoso

 
     
     
 

Prof. Peter Pietzuch

Department of Computing, Imperial College London

Elastic and Fault-Tolerant Stream Processing in the Cloud

27 May 2014

09:30, IST, room QA1.3 (south tower)

Abstract

>As users of "big data" applications want fresh processing results, we witness a new breed of stream processing systems that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the "pay-as-you-go" model of cloud computing, they must scale out on demand; (ii) with deployments on hundreds of virtual machines (VMs), failures are common -- systems must therefore be fault-tolerant with fast recovery times. An open question is how to achieve these two goals when stream queries include stateful operators whose state may depend on the complete history of the stream. In this talk, I describe an integrated approach for dynamic scale out and recovery of stateful stream processing operators. The idea is to expose internal operator state explicitly to the stream processing system through a set of state management primitives. Externalised operator state is checkpointed periodically and backed up by the system. In addition, the system identifies operator bottlenecks and automatically scales them out by allocating new VMs. We evaluate this approach as part of the SEEP experimental stream processing system on the Amazon EC2 cloud platform and show that it can scale automatically, while recovering quickly from failures. (This talk is based on work published at ACM SIGMOD'13 and USENIX ATC'14.)
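
The core idea, exposing operator state to the runtime so it can be checkpointed, restored and repartitioned, fits in a few lines. The sketch below is illustrative only; the primitive names are assumptions, not the actual SEEP API.

    import copy

    class StatefulOperator:
        # A toy windowed-count operator whose internal state is exposed
        # to the runtime through explicit state-management primitives.
        def __init__(self):
            self.counts = {}

        def process(self, key):
            self.counts[key] = self.counts.get(key, 0) + 1

        def get_state(self):           # checkpoint primitive
            return copy.deepcopy(self.counts)

        def set_state(self, state):    # restore / scale-out primitive
            self.counts = copy.deepcopy(state)

    op = StatefulOperator()
    for k in ["a", "b", "a"]:
        op.process(k)
    backup = op.get_state()            # periodic checkpoint, backed up remotely
    post_checkpoint = ["b", "a"]       # tuples buffered upstream after it
    for k in post_checkpoint:
        op.process(k)

    recovered = StatefulOperator()     # fresh VM after a failure
    recovered.set_state(backup)        # restore the last checkpoint...
    for k in post_checkpoint:          # ...and replay the buffered tuples
        recovered.process(k)
    assert recovered.counts == op.counts

Scale-out works the same way: the externalised state can be split by key range across two new operator instances instead of being restored into one.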

Bio

>Peter Pietzuch is a Senior Lecturer (Associate Professor) at Imperial College London, leading the Large-scale Distributed Systems (LSDS) group in the Department of Computing. His research focuses on the design and engineering of scalable, reliable and secure large-scale software systems, with a particular interest in data management and networking issues. He has published over sixty research papers in international venues, including USENIX ATC, NSDI, SIGMOD, VLDB, ICDE, ICDCS, Middleware and DEBS. He has co-authored a book on Distributed Event-based Systems published by Springer. Before joining Imperial College, he was a post-doctoral fellow at Harvard University. He holds PhD and MA degrees from the University of Cambridge. (http://www.doc.ic.ac.uk/~prp/)

Host

>Paulo Jorge Pires Ferreira

 
     
     
 

Prof. Paulo Veríssimo

University of Lisbon, Portugal

What happens when you let reality inspire your research?

7 April 2014

11:00, IST, room QA1.3 (Torre Sul)

Abstract

>It is not often that one finds concrete problems capable of inspiring really advanced research. Computing and communications, having become commodities on which societies largely depend, created such an opportunity in what concerns their security and dependability. Yet, since the problems appear so real, one can always follow the temptation of identifying the immediate problems and looking for immediate solutions. This talk is about daring to ask questions about daring subjects in distributed systems, fault tolerance and security, and about how this impacted the research of a group over the past few years. The talk will start by giving a notion of the security and dependability risks hanging over modern societies and their ICT systems, in crucial areas such as telco and cloud, the power grid and cyber-physical systems in general, or health and genomics data. It then presents the results of several research projects which have asked some of those daring questions, e.g. about why not: letting your attackers live amongst you; self-healing computers to make them work forever; recovering information despite having lost most of it; putting sensitive information in clouds without trusting the providers; or publishing genomics information whilst preserving privacy. The last part of the talk will discuss how proposing to tackle problems of real substance and impact ended up inspiring new theoretical models and predicates for distributed systems, impossibility results and algorithmic lower bounds. What more can you ask? In conclusion: solving real problems does not necessarily prevent you from doing really advanced research, if instead of merely «seeing things and saying 'Why?'», you ask the right questions and are capable of «dreaming things that never were, and say, "Why not?"» (George Bernard Shaw, "Back to Methuselah", 1921)

Bio

>Paulo Veríssimo is a Professor at the Department of Computer Science and Engineering, U. of Lisbon Faculty of Sciences (FCUL, http://www.di.fc.ul.pt/~pjv), adjunct Professor of the ECE Dept., Carnegie Mellon University, elected member of the Board of the U. of Lisbon and of the Scientific Council of the FCUL, and Director of LaSIGE (http://lasige.di.fc.ul.pt). He is currently Chair of the IFIP WG 10.4 on Dependable Computing and Fault-Tolerance and vice-Chair of the Steering Committee of the IEEE/IFIP DSN conference. PJV is a Fellow of the IEEE and of the ACM. He is associate editor of the Elsevier Int’l Journal on Critical Infrastructure Protection. Veríssimo leads the Navigators group of LaSIGE, and is currently interested in distributed architectures, middleware and algorithms for: adaptability and safety of real-time networked embedded systems; and resilience of secure and dependable large-scale systems. He is author of over 170 peer-refereed publications and co-author of 5 books. (http://www.di.fc.ul.pt/~pjv/)

Host

>Luís Eduardo Teixeira Rodrigues

 
     
     
 

Prof. Luis Ceze

University of Washington, USA

Disciplined Approximate Computing: From Language to Hardware, and Beyond

14 March 2014

11:00, Grande Auditório do Pavilhão de Civil no IST

Abstract

>Energy is increasingly a first-order concern in computer systems. Exploiting energy-accuracy trade-offs is an attractive choice in applications that can tolerate inaccuracies. A key challenge, though, is how to isolate parts of the program that must be precise from those that can be approximated so that a program functions correctly even as quality of service degrades. Addressing that challenge leads to opportunities for approximate computing across the entire system stack. In this talk I will describe our effort on co-designing language, hardware and system support to take advantage of approximate computing across the system stack in a safe and efficient way. We use type qualifiers to declare data that may be subject to approximate computation. Using these types, the system automatically maps approximate variables to potentially imprecise and unreliable but much more efficient storage and data operations, as well as more energy-efficient algorithms provided by the programmer. In addition, the system can statically guarantee isolation of the precise program component from the approximate component. This allows a programmer to control explicitly how information flows from approximate data to precise data. Importantly, employing static analysis eliminates the need for dynamic checks, further improving energy savings. I will describe a micro-architecture that offers explicit approximate storage and computation and a proposal on using neural networks as approximate accelerators for general programs. I will conclude with an overview of our current/future research directions, including language extensions for quality-of-result specification, programming tools, approximate persistent storage and approximate wireless communication. This Distinguished Lecture Series is organized within a special partnership with the JEEC2014 (http://groups.ist.utl.pt/jeec/jeec14/)
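
The type-qualifier idea can be sketched compactly. The system described in the talk enforces this isolation statically; the toy below only mimics the information-flow rule dynamically, and the names Approx and endorse are illustrative assumptions:

    class Approx:
        # Wrapper marking data that may be stored or computed approximately.
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            v = other.value if isinstance(other, Approx) else other
            return Approx(self.value + v)   # approximation taints the result

    def endorse(x: Approx):
        # The only sanctioned way to move approximate data into precise
        # code: an explicit, programmer-visible conversion.
        return x.value

    pixel = Approx(200)           # tolerates lossy storage and cheaper ops
    brightness = pixel + 10       # stays approximate
    total = 0                     # precise accumulator
    # total += brightness         # TypeError: no implicit approx-to-precise flow
    total += endorse(brightness)  # allowed, because the flow is explicit
    print(total)                  # 210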

Bio

>Luis Ceze is an Associate Professor in the Computer Science and Engineering Department at the University of Washington. His research focuses on computer architecture, programming languages and operating systems to improve the programmability, reliability and energy efficiency of multiprocessor systems. He has co-authored over 60 papers in these areas, and has had several papers selected as IEEE Micro Top Picks and CACM Research Highlights. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship and the 2013 IEEE TCCA Young Computer Architect Award. He consults for Microsoft Research and co-founded Corensic and Konyac, both UW-CSE spin-off companies. He was born and raised in Sao Paulo, Brazil, where it drizzles all the time; he now (mostly) lives in the similarly drizzly Seattle. When he is not working he is found either eating or cooking. (http://homes.cs.washington.edu/~luisceze/)

Host

>Leonel Augusto Pires Seabra de Sousa

 
     
     
 

Dr. Jeremy Bennett

Embecosm, UK

Talk1:The OpenRISC experience & Talk2:Machine Guided Energy Efficient Compilation

7 February 2014

14:30, Anfiteatro do Complexo Interdisciplinar no IST

Abstract

>Dr. Jeremy Bennett brings us two short lectures: the first under the theme "Free softcores, tools and toolchains: The OpenRISC experience", the second about "MAGEEC: Machine Guided Energy Efficient Compilation". Free softcores, tools and toolchains: The OpenRISC experience: In this talk we will look at the availability of free and open source softcores, EDA tools and compiler tool chains. Central to this will be a presentation of the OpenRISC 1000, a fully open 32/64-bit RISC processor architecture. Inspired by the MIPS and DLX architectures, the OpenRISC 1000 has many Verilog implementations and is used in a wide range of commercial products, including Samsung set-top boxes, NXP Jennic Zigbee chips and NASA's TechEdSat, which flew in 2012/13. In addition to the design and Verilog implementations being fully open, the processor is supported by open source front-end EDA tools such as Icarus Verilog and Verilator. It has a comprehensive and robust GNU tool chain, with an experimental LLVM tool chain also available. Linux and a wide range of RTOSs are supported. As well as describing the engineering implementation, the talk will look at how such an open design has been successful in a commercial environment, the business models that are most appropriate to such an open source approach, and where such business models can fail. MAGEEC: Machine Guided Energy Efficient Compilation: We are used to compilers which optimize for execution speed and (in the embedded sector) for code size. In 2012 James Pallister of Bristol University and Embecosm led the seminal research project which demonstrated conclusively that compiler optimization has a major impact on the energy consumed by the generated code (http://comjnl.oxfordjournals.org/content/early/2013/11/11/comjnl.bxt129.abstract?keytype=ref&ijkey=aA4RYlYQLNVgkE3). This finding has immense potential for data center power usage, for the battery life of consumer devices, for the efficiency of devices relying on energy scavenging, and for remote sensing, where batteries must last for years at a time. In this short talk, we will explore how compiled programs consume energy and the opportunities for compiler optimization to reduce energy consumption. We will provide an introduction to MAGEEC, an 18-month project supported by the UK Technology Strategy Board, which uses machine learning to select compiler optimizations that will yield the most energy efficient compiled code.
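
The search problem that MAGEEC addresses is easy to sketch: with n flags there are 2**n combinations whose energy could be measured, which is why a learned model mapping program features to promising flag sets is attractive. In the toy harness below, the flag list and the measure_energy stand-in are assumptions; a real setup would compile the benchmark with each flag set and read a hardware energy probe.

    import itertools, random

    FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer"]

    def measure_energy(flag_set):
        # Stand-in for compiling with these flags and measuring energy;
        # seeded so repeated measurements of a set agree within one run.
        random.seed(hash(frozenset(flag_set)))
        return 10.0 - 1.5 * len(flag_set) + random.uniform(-1, 1)

    # Exhaustive sweep over all 2**n flag combinations: fine for 3 flags,
    # hopeless for the hundreds of options a real compiler exposes.
    best = min(
        (set(c) for r in range(len(FLAGS) + 1)
         for c in itertools.combinations(FLAGS, r)),
        key=measure_energy,
    )
    print("lowest-energy flag set:", best)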

Bio

>Dr Jeremy Bennett is Embecosm’s founder and an expert on silicon chip modeling, source-level debuggers and compilers, for which Embecosm provides commercial support services. A former academic, Jeremy holds an MA and PhD from Cambridge University and is a Member of the British Computer Society, a Chartered Engineer, a Chartered Information Technology Professional and a Fellow of the Royal Society of Arts. He is the author of the standard textbook “Introduction to Compiling Techniques” (McGraw-Hill 1990, 1996, 2003). (http://www.jeremybennett.com/)

Host

>José João Henriques Teixeira de Sousa

 
     
     
 

Prof. Rodrigo Rodrigues

Universidade Nova de Lisboa, Portugal

Head in the clouds: an overview of cloud computing and some associated research challenges

24 January 2014

10:00, IST, room EA3

Abstract

>Cloud computing is a fast growing, multi-billion dollar industry, with several forecasts predicting an annual growth rate for this market that is well above 20% during the remainder of the current decade. In this talk I will give an overview of cloud computing, its history, its particular characteristics that distinguish it from conventional distributed systems, and some interesting research challenges that stem from these distinctive features. I will also present two projects that members of my group have worked on over the past few years addressing some of these research challenges, and discuss some open problems in this area.

Bio

>Rodrigo Rodrigues is an associate professor at the Universidade Nova de Lisboa and a member of the NOVA-LINCS research lab. Previously, he was a tenure-track faculty member at the Max Planck Institute for Software Systems (MPI-SWS), where he led the Dependable Systems Group, an assistant professor at the Instituto Superior Técnico (IST), and a researcher at INESC-ID. He graduated from the Massachusetts Institute of Technology with a doctoral degree in 2005. During his PhD, he was a researcher at MIT's Computer Science and Artificial Intelligence Laboratory, under the supervision of Prof. Barbara Liskov. He received his Master's degree from MIT in 2001, and an undergraduate degree from the IST in 1998. He has won several fellowships and awards, including a best paper award at the 18th ACM Symposium on Operating Systems Principles (SOSP) and a special recognition award from MIT's Department of Electrical Engineering and Computer Science, and he was awarded the first ERC starting grant in Computer Science in Portugal. (http://asc.di.fct.unl.pt/~rodrigo/)

Host

>Maria Inês Camarate de Campos Lynce de Faria

 
     
     
 

Prof. Maxime Crochemore

Université Paris-Est, France

Repetitions in Strings

2 December 2013

17:00, meeting room @ Av Duque Ávila, 23, Lisboa

Abstract

>Large amounts of text are generated every day in cyberspace via Web sites, emails, social networks, and other communication networks. These text streams need to be analysed, for example to detect critical events or to monitor business. An important characteristic to take into account in this setting is the existence of repetitions in texts. Their study constitutes a fundamental area of combinatorics on words, due to major applications to string algorithms, data compression, music analysis, and biological sequence analysis. The talk surveys algorithmic methods used to locate repetitive segments in strings. It discusses the notion of runs, which encompasses various types of periodicities considered by different authors, as well as the notion of maximal-exponent factors, which captures the most significant repeats occurring in a string. The design and analysis of repeat finders rely on combinatorial properties of words and raise a series of open problems in combinatorics on words.
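
The central object is easy to state in code: a "run" is a maximal periodic segment whose length is at least twice its period. The quadratic scan below is purely expository; the algorithms surveyed in the talk locate all runs in linear time.

    def runs(s):
        # Report maximal segments [start, end) repeating with period p
        # at exponent >= 2 (a naive O(n^2) illustration).
        found, n = set(), len(s)
        for p in range(1, n // 2 + 1):            # candidate period
            i = 0
            while i + p < n:
                j = i
                while j + p < n and s[j] == s[j + p]:
                    j += 1
                if j + p - i >= 2 * p:            # long enough to repeat fully
                    found.add((i, j + p, p))
                i = j + 1
        return sorted(found)

    print(runs("mississippi"))
    # [(1, 8, 3), (2, 4, 1), (5, 7, 1), (8, 10, 1)]:
    # e.g. (1, 8, 3) is "ississi" with period 3, (2, 4, 1) is "ss".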

Bio

>Prof. Maxime Crochemore received his PhD in 1978 and his Doctorat (DSc) in 1983 at the University of Rouen. He got his first professorship position at the University of Paris-Nord in 1985, where he acted as President of the Department of Mathematics and Computer Science for two years. He became professor at the University Paris 7 in 1989 and was involved in the creation of the University of Marne-la-Vallée, where he has been Professor Emeritus since 2007. He also created the Computer Science research laboratory of this university in 1991 and was its director until 2005. He was Deputy Scientific Director of the Information and Communication Department of CNRS from 2004 to 2006. He was a Senior Research Fellow from 2002 to 2007 and is presently Professor at King's College London. Prof. Crochemore's research interests are in the design and analysis of algorithms. His major achievements are on string algorithms, which include pattern matching, text indexing, coding, and text compression. He also works on the combinatorial background of these subjects and on their applications to bio-informatics. He has co-authored several textbooks on algorithms and published more than 200 articles. He has been the recipient of several French grants on string algorithms and bio-informatics. He has participated in many international projects on algorithms and supervised to completion more than twenty PhD students. (http://monge.univ-mlv.fr/~mac/)

Host

>Ana Teresa Correia de Freitas

 
     
     
 

Prof. José Fiadeiro

University of London, UK

A component and an interface algebra for dynamic networks of interactions

14 November 2013

10:00, Anfiteatro EA3

Abstract

>As a result of the global interconnectivity ensured by the Web, the new landscape of systems operating in cyberspace is one of networks of systems where execution at the network nodes, which can be triggered by humans or performed by programmed devices, enables the spontaneous evolution of network links: as they execute, applications create a ‘social network’ of their own and use it to procure the resources or services that they need to fulfil their own ‘selfish’ goals. In this talk, we discuss a component and an interface algebra (in the sense of de Alfaro and Henzinger) for such dynamic networks of interactions. The component algebra allows us to reason about global properties of such networks, such as consistency (all the processes in the network can agree on a joint trace) and what we call "dynamic consistency" (all the processes in the network can agree on a joint trace no matter what interactions the network receives from its environment). The interface algebra gives us the means to control the way a network can evolve by restricting the interconnections that it can establish with other networks. This is joint work with Antónia Lopes, Faculdade de Ciências da Universidade de Lisboa.
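
As a finite toy rendering of the consistency property just defined (real components are far richer), one can model each process by the set of traces it accepts; the network is then consistent precisely when the processes can agree on at least one joint trace. The process names and traces below are invented:

    # Each process is modelled by the finite set of traces it accepts.
    p1 = {("req", "ack"), ("req", "nack")}
    p2 = {("req", "ack"), ("req", "timeout")}

    def joint_traces(*processes):
        # Consistency check: a network of processes is consistent when
        # the intersection of their trace sets is non-empty.
        return sorted(set.intersection(*processes))

    print(joint_traces(p1, p2))   # [('req', 'ack')]: consistent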

Host

>João Paulo Marques da Silva

 
     
     
 

Dr. Paul Debevec

USC Institute for Creative Technologies, USA

Achieving Photoreal Digital Actors in Film and in Real-Time

21 October 2013

14:30, Instituto Superior Técnico TAGUSPark (Sala 1.38)

Abstract

>Somewhere between "Final Fantasy" and "The Curious Case of Benjamin Button", digital actors crossed the "Uncanny Valley" from looking strangely synthetic to believably real. This talk describes how the Light Stage scanning systems and HDRI lighting techniques developed at the USC Institute for Creative Technologies have helped create digital actors in a range of recent movies and research projects. In particular, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create 2008's "Digital Emily", a collaboration with Image Metrics (now Faceware) yielding one of the first photoreal digital actors, and 2013's "Digital Ira", a collaboration with Activision Inc., yielding the most realistic real-time digital actor to date. The talk covers recent developments in HDRI lighting, polarization difference imaging, reflectance measurement, and 3D object scanning, and concludes with advances in autostereoscopic 3D displays to enable 3D teleconferencing and holographic characters.

Bio

>Paul Debevec is a Research Professor at the University of Southern California and the Associate Director of Graphics Research at USC's Institute for Creative Technologies. Since his 1996 Ph.D. at UC Berkeley, Debevec's publications and animations have focused on techniques for photogrammetry, image-based rendering, high dynamic range imaging, image-based lighting, appearance measurement, facial animation, and 3D displays. Debevec serves as the Vice President of ACM SIGGRAPH and received a Scientific and Engineering Academy Award® in 2010 for his work on the Light Stage facial capture systems, used in movies including Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, Tron: Legacy, The Avengers, and Oblivion. (http://www.pauldebevec.com/)

Host

>Joaquim Armando Pires Jorge

 
     
     
 

Dr. Paul Bertone

European Bioinformatics Institute, UK

Digital information storage in DNA

4 September 2013

09:00, Congress Center

Abstract

>The amount of information that humans produce and want to store is increasing exponentially. It is estimated that the total digital information on Earth is of the order of zettabytes (thousands of billions of billions of bytes). The amount of digital information that people want to archive, i.e. store safely, recoverably, for long periods of time with only rare access and with minimal ongoing maintenance requirements, is also growing. However, at present essentially no long-term archiving of digital information is taking place. This is because all current digital storage media require a continual cycle of maintenance to renew both the storage medium and the 'reading' and 'writing' hardware. This in turn is because no conventional computing storage technology is trusted to survive more than a few years. Recent genome-science-inspired advances in the technologies for reading and writing DNA led us to investigate the possibility of using DNA as a digital archive medium. DNA is a stable information carrier, with 10,000-year-old intact sequences routinely recovered from historical samples. Safe DNA storage conditions are easily maintained at low cost, and the ability to read DNA fragments will surely survive for as long as there are technologically advanced humans inquisitive about the workings of living systems. In our proof-of-concept experiment, we showed how existing DNA technologies can be used to store and recover digital information in a manner that could be extrapolated to global data scales, incorporating modern methods such as error-correcting codes for data integrity. This talk will describe this experiment and speculate on the future of DNA as a digital storage medium. This talk is a keynote address in the scope of the iPRES-2013/DC-2013 conferences (http://ipres2013.ist.utl.pt/); the conferences require registration, but this talk is open to a free audience.
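
A minimal sketch of the flavour of encoding involved: map base-3 digits onto whichever of the three bases differs from the previous one, so that no base ever repeats (homopolymer runs are error-prone to synthesise and sequence). This simplification is an assumption for illustration; a practical scheme adds error-correcting codes and addressing on top.

    BASES = "ACGT"

    def encode(data: bytes) -> str:
        trits = []                       # 6 base-3 digits per byte (3**6 >= 256)
        for byte in data:
            for _ in range(6):
                trits.append(byte % 3)
                byte //= 3
        dna, prev = [], "A"
        for t in trits:
            prev = [b for b in BASES if b != prev][t]   # never repeat a base
            dna.append(prev)
        return "".join(dna)

    def decode(dna: str) -> bytes:
        trits, prev = [], "A"
        for b in dna:
            trits.append([c for c in BASES if c != prev].index(b))
            prev = b
        return bytes(sum(t * 3**j for j, t in enumerate(trits[i:i + 6]))
                     for i in range(0, len(trits), 6))

    msg = b"DNA archive"
    dna = encode(msg)
    assert decode(dna) == msg
    assert all(a != b for a, b in zip(dna, dna[1:]))    # no homopolymers
    print(dna[:24], "...")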

Bio

>PhD Yale University, 2005. At EMBL-EBI since 2005. Joint appointments in Genome Biology and Developmental Biology Units. Associate Investigator, Wellcome Trust - Medical Research Council Stem Cell Institute, University of Cambridge (http://www.ebi.ac.uk/about/people/paul-bertone)

Host

>José Luis Brinquete Borbinha

 
     
     
 

Prof. Steve Young

University of Cambridge, UK

Spoken Dialogue Systems: Progress and Challenges

24 June 2013

14:30, Anfiteatro do Complexo Interdisciplinar, IST Alameda

Abstract

>The potential advantages of statistical dialogue systems include lower development cost, increased robustness to noise and the ability to learn on-line so that performance can continue to improve over time. This talk will briefly review the basic principles of statistical dialogue systems including belief tracking and policy representations. Recent developments at Cambridge in the areas of rapid adaptation and on-line learning using Gaussian processes will then be described. The talk will conclude with a discussion of some of the major issues limiting progress.
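
Belief tracking, mentioned above, can be illustrated with a tiny Bayesian update over hidden user goals. The goals, confidence scores and noise floor below are invented for illustration; real trackers operate over far richer state spaces.

    goals = ["restaurant", "hotel", "taxi"]
    belief = {g: 1 / len(goals) for g in goals}        # uniform prior

    def update(belief, asr_hypotheses):
        # asr_hypotheses: {goal: confidence} from the recognizer's n-best
        # list, used as the observation likelihood; 0.05 is a noise floor.
        posterior = {g: belief[g] * asr_hypotheses.get(g, 0.05) for g in belief}
        z = sum(posterior.values())
        return {g: p / z for g, p in posterior.items()}

    # Two noisy turns: neither is trusted outright, but evidence accumulates.
    belief = update(belief, {"restaurant": 0.6, "hotel": 0.3})
    belief = update(belief, {"restaurant": 0.7, "taxi": 0.2})
    print(max(belief, key=belief.get), belief)

The policy then maps this distribution, rather than a single best hypothesis, to actions such as confirming or asking again; POMDP methods optimise that mapping, and the Gaussian-process techniques mentioned above speed up learning it online.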

Bio

>Steve Young received a BA in Electrical Sciences from Cambridge University in 1973 and a PhD in Speech Processing in 1978. He held lectureships at both Manchester and Cambridge Universities before being elected to the Chair of Information Engineering at Cambridge University in 1994. He was a co-founder and Technical Director of Entropic Ltd from 1995 until 1999, when the company was taken over by Microsoft. After a short period as an Architect at Microsoft, he returned full-time to the University in January 2001, where he is now Senior Pro-Vice-Chancellor. His research interests include speech recognition, language modelling, spoken dialogue and multi-media applications. He is the inventor and original author of the HTK Toolkit for building hidden Markov model-based recognition systems (see http://htk.eng.cam.ac.uk), and with Phil Woodland he developed the HTK large vocabulary speech recognition system, which has figured strongly in DARPA/NIST evaluations since it was first introduced in the early nineties. More recently he has developed statistical dialogue systems and pioneered the use of Partially Observable Markov Decision Processes for modelling them. He also has active research in voice transformation, emotion generation and HMM synthesis. He has written and edited books on software engineering and speech processing, and he has published, as author and co-author, more than 250 papers in these areas. He is a Fellow of the Royal Academy of Engineering, the IEEE, the IET and the Royal Society of Arts. He served as senior editor of Computer Speech and Language from 1993 to 2004 and was Chair of the IEEE Speech and Language Processing Technical Committee from 2009 to 2011. In 2004, he received an IEEE Signal Processing Society Technical Achievement Award. He was elected ISCA Fellow in 2008 and was awarded the ISCA Medal for Scientific Achievement in 2010. He is the recipient of the 2013 EURASIP Individual Technical Achievement Award. (http://mi.eng.cam.ac.uk/~sjy/)

Host

>Isabel Maria Martins Trancoso

 
     
     
 

Prof. Daniel Kofman

Telecom ParisTech (ENST), France

An integrated view on future information and communication networks and services.

21 May 2013

18:00, Centro de Congressos, Pavilhão de Civil, IST (in cooperation with IEEE ComSoc Portugal).

Abstract

>The talk first presents a vision of future information and communication services and the related requirements and challenges. It then gives a unified view of the major trends enabling the presented services' evolution, including better-integrated cloud and networking solutions, future content distribution systems, the Internet of Things, Big Data, as well as more futuristic concepts like nano-devices and nano-networks. Finally, to illustrate possible applications, the focus turns to the impact of evolved ICT solutions on future "energy services" and on the required evolution of the control plane of smart grids.

Bio

>Professor Daniel Kofman is: co-founder and Director of the LINCS, a joint research center on future communication networks, systems and services, financed by INRIA, Institut Mines-Telecom, Université Pierre et Marie Curie and Alcatel-Lucent and sponsored by several other organizations (75 researchers and experts from industry and academia, collocated in the same premises); RAD Data Communications Fellow; strategic advisor, former corporate Chief Technology Officer (CTO) and member of the Corporate Strategy Committee (CEO, VPs and CTO); co-founder and Chairman of the Steering Board of the European Network of Excellence "Euro-NGI" (59 partners from academia and industry in 18 countries, more than 200 researchers) and of its successors Euro-FGI and Euro-NF; member of the Scientific Committee of the French Parliament (24 members covering all the scientific domains); advisor and expert for various national and international companies and institutions (European Commission, Centre d'Analyse Stratégique of the French Government, etc.); and co-founder of a technological start-up. (http://perso.telecom-paristech.fr/~kofman/)

Host

>Augusto Julio Domingues Casaca

 
     
     
 

Prof. Maurice Herlihy

Brown University, USA

The Multicore Revolution

18 April 2013

14:30, Anfiteatro do Complexo Interdisciplinar, IST Alameda

Abstract

>Computer architecture is undergoing, if not another revolution, then a vigorous shaking-up. The major chip manufacturers have, for the time being, mostly given up trying to make processors run faster. Instead, they have switched to "multicore" architectures, in which multiple processors (cores) communicate directly through shared hardware caches, providing increased concurrency instead of increased clock speed. As a result, system designers and software engineers can no longer rely on increasing clock speed to hide software bloat. Instead, they must somehow learn to make effective use of increasing parallelism. This adaptation will not be easy. Conventional synchronization techniques based on locks and conditions are unlikely to be effective in such a demanding environment. Coarse-grained locks, which protect relatively large amounts of data, do not scale, and fine-grained locks introduce substantial software engineering problems. As a result, the community has increasingly turned to hardware and software models based on atomic transactions. This talk will survey the area, with a focus on open research problems. More information and slides: http://prezi.com/_o4gspffcgm5/lisbon/?auth_key=c333e34bac13879ab3c6cb386eaf3590911176ba&kw=view-_o4gspffcgm5&rc=ref-77718 Hosts: Luís Rodrigues and João Cachopo
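
The appeal of the transactional model is that the programmer writes a block to be retried on conflict instead of reasoning about a lock discipline. Below is a crude optimistic-retry sketch; the names TVar and atomic are assumptions, and this is neither a real software transactional memory nor the hardware variety.

    import threading

    class TVar:
        # A transactional variable: a value plus a version counter.
        def __init__(self, value):
            self.value, self.version = value, 0

    _commit_lock = threading.Lock()

    def atomic(transaction, *tvars):
        while True:                                   # optimistic retry loop
            with _commit_lock:                        # consistent snapshot
                snapshot = [(v.value, v.version) for v in tvars]
            results = transaction(*(val for val, _ in snapshot))
            with _commit_lock:                        # validate and commit
                if all(v.version == ver for v, (_, ver) in zip(tvars, snapshot)):
                    for v, new in zip(tvars, results):
                        v.value, v.version = new, v.version + 1
                    return
            # a concurrent commit invalidated the snapshot: re-run the block

    a, b = TVar(100), TVar(0)

    def transfer(x, y):            # a pure function of the snapshot
        return x - 10, y + 10

    threads = [threading.Thread(target=atomic, args=(transfer, a, b))
               for _ in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(a.value, b.value)        # 50 50, with no locking in transfer itself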

Bio

>Maurice Herlihy has an A.B. in Mathematics from Harvard University, and a Ph.D. in Computer Science from M.I.T. He served on the faculty of Carnegie Mellon University, on the staff of DEC Cambridge Research Lab, and is currently Professor in the Computer Science Department at Brown University. He is an ACM Fellow, and is the recipient of the Dijkstra Prize in Distributed Computing in 2003 and in 2012, and the Goedel Prize in theoretical computer science in 2004. He received the W. Wallace McDowell Award in 2013. His 1993 paper inventing transactional memory won the 2008 ISCA Influential Paper Award. (http://www.cs.brown.edu/~mph/)

Host

>Luís Eduardo Teixeira Rodrigues

 
     
     
 

Prof. Edmund M. Clarke

Carnegie Mellon University, USA

Model Checking and the Curse of Dimensionality

13 March 2013

11:00, Lecture Room EA1, IST Alameda

Abstract

>Model Checking is an automatic verification technique for large state transition systems. It was originally developed for reasoning about finite-state concurrent systems. The technique has been used successfully to debug complex computer hardware and communication protocols. Now, it is beginning to be used for software verification as well. The major disadvantage of the technique is a phenomenon called the State Explosion Problem. This problem is impossible to avoid in the worst case. However, by using sophisticated data structures and clever search algorithms, it is now possible to verify state transition systems with astronomical numbers of states.
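
In its explicit-state form, model checking a safety property is an exhaustive search of the reachable state graph. The toy below breadth-first searches an intentionally broken two-process mutual exclusion protocol (the protocol and encoding are invented for illustration); the State Explosion Problem is precisely that realistic systems make the visited set astronomically large, which symbolic techniques such as BDDs combat.

    from collections import deque

    # State: (pc0, pc1, flag0, flag1); pc 0 = test, 1 = set & enter, 2 = in CS.
    def successors(s):
        pc, flag = list(s[:2]), list(s[2:])
        for i in (0, 1):
            if pc[i] == 0 and flag[1 - i] == 0:   # test the other's flag (not
                nxt = pc.copy(); nxt[i] = 1       # atomic with the set: the bug)
                yield tuple(nxt + flag)
            elif pc[i] == 1:                      # set own flag, enter CS
                nxt, f = pc.copy(), flag.copy()
                nxt[i], f[i] = 2, 1
                yield tuple(nxt + f)
            elif pc[i] == 2:                      # leave CS, clear flag
                nxt, f = pc.copy(), flag.copy()
                nxt[i], f[i] = 0, 0
                yield tuple(nxt + f)

    init = (0, 0, 0, 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s[0] == 2 and s[1] == 2:               # safety property violated
            print("mutual exclusion violated in state", s)
            break
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(len(seen), "reachable states explored")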

Bio

>Edmund M. Clarke received a B.A. degree in mathematics from the University of Virginia in 1967, an M.A. degree in mathematics from Duke University in 1968, and a Ph.D. degree in computer science from Cornell in 1976. He taught at Duke University from 1976-1978 and at Harvard University from 1978-1982. Since 1982 he has been on the faculty in the Computer Science Department at Carnegie-Mellon University. In 1995 he became the first recipient of the FORE Systems Professorship, an endowed chair in the School of Computer Science. He was named a University Professor in 2008. Dr. Clarke's interests include software and hardware verification and automatic theorem proving. In 1981 he and a graduate student, Allen Emerson, first proposed the use of Model Checking as a verification technique for finite state concurrent systems. His research group pioneered the use of Model Checking for hardware and software verification. In particular, his research group developed Symbolic Model Checking using BDDs, Bounded Model Checking using fast CNF satisfiability solvers, and pioneered the use of CounterExample-Guided Abstraction Refinement (CEGAR). In addition, Clarke and his students developed the first parallel general resolution theorem prover (Parthenon), and the first theorem prover to be based on a symbolic computation system (Analytica). Dr. Clarke is one of the founders of the conference on Computer Aided Verification (CAV) and served on its steering committee for many years. He is the former editor-in-chief of Formal Methods in Systems Design. He served on the editorial boards of Distributed Computing, Logic and Computation, and IEEE Transactions on Software Engineering. In 1995 he received a Technical Excellence Award from the Semiconductor Research Corporation. He was a co-recipient of the ACM Kanellakis Award in 1998. In 1999 he received an Allen Newell Award for Excellence in Research from the Carnegie Mellon Computer Science Department. In 2004 he received the IEEE Harry H. Goode Memorial Award. He was elected to the National Academy of Engineering in 2005 for contributions to the formal verification of hardware and software correctness. He was a co-recipient of the 2007 ACM Turing Award for his role in developing Model Checking into a highly effective verification technology, widely adopted in the hardware and software industries. He received the 2008 CADE Herbrand Award for Distinguished Contributions to Automated Reasoning and a 2010 LICS Test-of-Time Award. In 2011 he was elected to the American Academy of Arts and Sciences. He received an Honorary Doctorate from the Vienna University of Technology in 2012. Dr. Clarke is a Fellow of the ACM and the IEEE, and a member of Sigma Xi and Phi Beta Kappa. (http://www.cs.cmu.edu/~emc/)

Host

>João Paulo Marques da Silva

 
     
     
 

Prof. Manuela Veloso

Carnegie Mellon University, USA

Symbiotic Autonomy: Robots, Humans, and the Web

11 February 2013

14:00, Auditório Ávila

Abstract

>We envision ubiquitous autonomous mobile robots that coexist and interact with humans while performing assistance tasks. Such robots are still far from common, as our environments offer great challenges to robust autonomous robot perception, cognition, and action. In this talk, I present symbiotic robot autonomy in which robots are aware of their limitations and proactively ask for help from humans, access the web for missing knowledge, and coordinate with other robots. Such symbiotic autonomy has enabled our CoBot robots to move in our multi-floor buildings performing a variety of service tasks, including escorting visitors, and transporting packages between locations. I will describe CoBot's fully autonomous effective mobile robot indoor localization and navigation algorithms, its human-centered task planning, and its symbiotic interaction with the humans and with the web. I will further discuss our ongoing research on knowledge learning from our speech-based robot interaction with humans. The talk will be illustrated with results and examples from many hours-long runs of the robots in our buildings.

Bio

>Manuela M. Veloso is Herbert A. Simon Professor of Computer Science at Carnegie Mellon University. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory for the study of multiagent systems where agents Collaborate, Observe, Reason, Act, and Learn (www.cs.cmu.edu/~coral). Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow, and is the current President of AAAI. She was recently recognized by the Chinese Academy of Sciences as Einstein Chair Professor. She also received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso is the author of one book, "Planning by Analogical Reasoning", and editor of several other books. She is also an author of over 280 journal articles and conference papers. (http://www.cs.cmu.edu/~mmv/)

Host

>Ana Maria Severino de Almeida e Paiva

 
     
     
 

Prof. Georges Gielen

Katholieke Universiteit Leuven, Belgium

Design reliable electronics in an unreliable world

24 January 2013

11:00, IST Alameda, Anfiteatro EA3 na Torre Norte do IST

Abstract

>Microelectronics have changed the way of life of every individual person in our society. We use and rely more and more upon electronic systems, from communications to multimedia to biomedical and automotive. However, the use of deeply scaled nanometer CMOS technologies brings significant reliability challenges for the electronics. In particular, increasing variability during fabrication, and aging due to phenomena such as bias temperature instability and soft breakdown, cause time-dependent malfunctioning of circuits. To prevent this from happening before the scheduled lifetime of a product, appropriate models and analysis tools are being developed that allow designers to simulate the expected problems. In addition, special design techniques are discussed that adapt the hardware at run time to maintain the required functionality over the entire lifetime. All problems and solutions will be illustrated with practical examples.
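
A toy Monte Carlo sketch shows how the two effects compound: random fabrication variability sets the initial spread, and an aging drift (here a BTI-like threshold-voltage shift) slowly pushes parts past their timing budget. The delay model and every number below are invented for illustration.

    import random

    def gate_delay(vth, vdd=1.0):
        # Simplistic alpha-power-law-style delay model (illustrative only).
        return 1.0 / (vdd - vth) ** 1.3

    def timing_yield(years, n=20000, budget=2.9):
        ok = 0
        for _ in range(n):
            vth = random.gauss(0.45, 0.03)        # fabrication variability
            vth += 0.02 * years ** 0.5            # BTI-like sqrt-time drift
            ok += gate_delay(vth) <= budget
        return ok / n

    for years in (0, 3, 10):
        print(f"year {years:2d}: timing yield = {timing_yield(years):.1%}")
    # Guard-bands buy margin up front; the run-time adaptation discussed
    # in the talk instead re-tunes the hardware as it ages.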

Bio

>Georges G.E. Gielen received the MSc and PhD degrees in Electrical Engineering from the Katholieke Universiteit Leuven, Belgium, in 1986 and 1990, respectively. In 1990, he was appointed as a postdoctoral research assistant and visiting lecturer at the Department of Electrical Engineering and Computer Science of the University of California, Berkeley. From 1991 to 1993, he was a postdoctoral research assistant of the Belgian National Fund of Scientific Research at the ESAT laboratory of the Katholieke Universiteit Leuven. In 1993, he was appointed assistant professor at the Katholieke Universiteit Leuven, where he was later promoted to associate professor and finally full professor in 2000. From 2007 till 2012 he was the Head of the Microelectronics and Sensors (MICAS) research division, which included five professors and more than 70 PhD students. Since August 2012 he has been the Chair of the Department of Electrical Engineering (ESAT) at the Katholieke Universiteit Leuven. He is also the Chair of the Leuven ICT (LICT) research center, and the PI coordinator of the Leuven Center of Excellence called CHIPS. His research interests are in the design of analog and mixed-signal integrated circuits, and especially in analog and mixed-signal CAD tools and design automation (modeling, simulation and symbolic analysis, analog synthesis, analog layout generation, analog and mixed-signal testing). He is coordinator or partner of several (industrial) research projects in this area, including several European projects (EU, MEDEA/CATRENE, ESA). He has authored or coauthored 7 books and more than 450 papers in edited books, international journals and conference proceedings. He is regularly a member of the Program Committees of international conferences (DAC, ICCAD, ISCAS, DATE, CICC...), and served as General Chair of the DATE conference in 2006 and of the ICCAD conference in 2007. He is currently the Chair of EDAA. He serves regularly as a member of the editorial boards of international journals (IEEE Transactions on Circuits and Systems, IEEE Transactions on Computer-Aided Design, Springer International Journal on Analog Integrated Circuits and Signal Processing, Elsevier Integration). He received the 1995 Best Paper Award of the John Wiley international journal on Circuit Theory and Applications, and was the 1997 Laureate of the Belgian Royal Academy of Sciences, Literature and Arts in the discipline of Engineering. He received the 2000 Alcatel Award from the Belgian National Fund of Scientific Research for his innovative research in telecommunications, and won the DATE 2004 conference Best Paper Award. He served as an elected member of the Board of Governors of the IEEE Circuits And Systems (CAS) Society, as an appointed member of the Board of Governors of the IEEE Council on Electronic Design Automation (CEDA), and as Chairman of the IEEE Benelux CAS Chapter. He served as President of the IEEE Circuits And Systems (CAS) Society in 2005, and as Chair of the IEEE Benelux Section in 2009-2010. He was elected DATE Fellow in 2007, and received the IEEE Computer Society Outstanding Contribution Award and the IEEE Circuits and Systems Society Meritorious Service Award in 2007. He has been a Fellow of the IEEE since 2002. (http://www.esat.kuleuven.be/micas/index.php?option=com_content&task=view&id=16&Itemid=61)

Host

>Luis Miguel Teixeira D Avila Pinto da Silveira

 
     
     
 

Prof. Dr.-Ing. Juergen Becker

Karlsruhe Institute of Technology - KIT. Dept. Electrical Engineering & Information Technology. Institute for Information Processing - ITIV. Karlsruhe, Germany.

Cyber-physical MPSoC Systems: Future Multi-Core Architectures for reliable Mobility & Technologies

5 December 2012

11:00, Anfiteatro do Complexo Interdisciplinar, IST Alameda

Abstract

>The field of embedded electronic systems, nowadays also called cyber-physical systems, is still emerging. A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. Today, a pre-cursor generation of cyber-physical systems can be found in areas as diverse as aerospace, automotive, chemical processes, civil infrastructure, energy, healthcare, manufacturing, transportation, entertainment, and consumer appliances. This generation is often referred to as embedded systems. In embedded systems the emphasis tends to be more on the computational elements, and less on an intense link between the computational and physical elements. Multipurpose adaptivity and reliability features are playing more and more of a central role, especially while scaling silicon technologies down according to Moore's benchmarks. Leading processor and mainframe companies are gaining more awareness of reconfigurable computing technologies due to increasing energy and cost constraints. My view is of an "all-win-symbiosis" of future silicon-based processor technologies and reconfigurable circuits/architectures. Dynamic and partial reconfiguration has progressed from academic labs to industry research and development groups, providing high adaptivity for a range of applications and situations. Reliability, failure redundancy and run-time adaptivity using real-time hardware reconfiguration are important aspects for current and future embedded systems, e.g. for smart mobility in automotive, avionics, railway, etc. Thus, scalability as we have experienced it for the last 35 years is at its end as we enter new phases of technology and certification within safety-critical application domains. Beyond the capabilities of traditional reconfigurable fabrics (like FPGAs), so-called multi-/many-core solutions are confirmed on the future semiconductor roadmaps. This requires new solutions for programming and integrating such parallel and heterogeneous architectures and platforms, especially in safety-critical application domains like automotive, avionics and railway. In addition, the nano era, with its corresponding circuits/architectures, allows for micro-mechanical switches that enable new memory and reconfiguration technologies with the advantage of online chip adaptivity and non-volatility. Transient faults may lead to unreliable information processing, as the physical quantities representing information in nanosized devices are much smaller. Power consumption and related problems present a challenge where information is processed within a smaller area/volume budget. This includes the consideration of appropriate fault-tolerance techniques and especially the discussion of necessary efficient and online self-repairing mechanisms for driving such future silicon and non-silicon based technologies and architectures. This keynote will finally discuss in detail the corresponding challenges and specifically outline the promising perspectives for future multi-/many-core as well as dynamically reconfigurable, complex, adaptive and reliable systems-on-chip, for embedded and also general-purpose computing systems.

Bio

>Juergen Becker is Full Professor for Embedded Electronic Systems in the Department of Electrical Engineering and Information Technology at Universität Karlsruhe (TH). His current research is focused on industry-driven System-on-Chip (SoC) integration with emphasis on adaptivity, e.g. dynamically reconfigurable hardware architecture development and application in automotive and communication systems. Prof. Becker is Head of the Institute for Information Processing (ITIV) and Department Director of Electronic Systems and Microsystems (ESM) at the Computer Science Research Center (FZI). From 2001-2005 he was Co-Director of the International Department at Universität Karlsruhe (TH), and from 2002-2008 Associate Editor of the IEEE Transactions on Computers. He is author and co-author of more than 300 scientific papers, and active as general and technical program chairman of national and international conferences and workshops. He is an executive board member of the German IEEE section, a board member of the GI/ITG Technical Committee on Architectures for VLSI Circuits, and a Senior Member of the IEEE. Since October 2005 Prof. Becker has been Board Member and Vice-President ("Prorektor") for Studies and Teaching at Universität Karlsruhe (TH), and from October 2009 to April 2012 he was Chief Higher Education Officer (CHEO) of the new Karlsruhe Institute of Technology (KIT), the unique merger of a large national research lab in the Helmholtz Society and a prominent state university of Baden-Wuerttemberg in Germany. Since July 2012 Prof. Becker has been Secretary General of CLUSTER, the association of 12 leading European technical universities. (http://www.itiv.kit.edu/english/21_53.php)

Host

>Luis Miguel Teixeira D Avila Pinto da Silveira

 
     
     
 

Prof. Eduardo F. Camacho

Dpto. Ingeniería de Sistemas y Automática. Escuela Superior de Ingenieros. Sevilla, Spain.

Control of solar thermal plants

8 November 2012

11:00, IST Alameda, Room EA5

Abstract

>The use of renewable energy, such as solar energy, experienced a great impulse during the second half of the seventies, just after the first big oil crisis. At that time economic issues were the most important factors, and interest in these types of processes decreased when oil prices fell. Nowadays there is renewed interest in the use of renewable energies, driven by the need to reduce the high environmental impact of fossil energy systems. There are two main drawbacks to solar energy systems: a) the resulting energy costs are not yet competitive, and b) solar energy is not always available when needed. Considerable research efforts are being devoted to techniques which may help to overcome these drawbacks; control is one of those techniques. A thermal solar power plant basically consists of a system where solar energy is collected, then concentrated and finally transferred to a fluid. The thermal energy of the hot fluid is then used for different purposes, such as generating electricity or desalinating sea water. While in other power generating processes the main source of energy (the fuel) can be manipulated, as it is used as the main control variable, in solar energy systems the main source of power, solar radiation, cannot be manipulated; furthermore, it changes seasonally and daily, acting as a disturbance from a control point of view. Solar plants have all the characteristics needed for using advanced control strategies able to cope with changing dynamics (nonlinearities and uncertainties). Fixed PID controllers cannot cope with some of the mentioned problems: they have to be detuned with low gain, producing sluggish responses, or, if they are tightly tuned, they may produce strong oscillations when the dynamics of the process vary due to environmental and/or operating condition changes. The use of more efficient control strategies resulting in better responses would increase the number of operational hours of the plants. The talk describes the main solar thermal plants, the control problems involved, and how control systems can help in increasing their efficiency. Some illustrative examples are given.
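
A toy simulation makes the control difficulty tangible: the outlet temperature of a collector field is regulated through the oil flow, while solar radiation, the actual energy input, acts as a disturbance that also changes the plant gain. The lumped model, the PI gains and all numbers below are invented for illustration; the talk deals with far richer models and controllers.

    def simulate(hours=8.0, dt=0.01, t_ref=180.0):
        temp, t_in, flow, integral = 120.0, 100.0, 1.0, 0.0
        kp, ki = 0.05, 0.05                  # fixed PI gains
        log = []
        for k in range(int(hours / dt)):
            t = k * dt
            irradiance = 900.0 if 2.0 < t < 5.0 else 400.0   # passing clouds
            # Lumped energy balance: radiation heats, flow carries heat away.
            temp += dt * (0.12 * irradiance - 0.5 * flow * (temp - t_in))
            error = temp - t_ref             # too hot -> raise the flow
            integral += error * dt
            flow = min(3.0, max(0.2, 1.0 + kp * error + ki * integral))
            log.append((t, irradiance, temp, flow))
        return log

    for t, irr, temp, flow in simulate()[::100]:
        print(f"t={t:4.1f}h  irr={irr:4.0f}  T={temp:6.1f}C  flow={flow:4.2f}")
    # The loop gain scales with radiation and operating point, so fixed
    # gains tuned for one condition behave differently in another; this
    # is what motivates the adaptive and predictive strategies above.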

Bio

>Eduardo F. Camacho received his doctorate in Electrical Engineering from the University of Seville, where he is now a full professor in the Department of System Engineering and Automatic Control. He has written the books Model Predictive Control in the Process Industry (1995), Advanced Control of Solar Plants (1997) and Model Predictive Control (1999; second edition 2004), published by Springer-Verlag, Control e Instrumentación de Procesos Químicos, published by Ed. Síntesis, and Control of Dead-time Processes (2007) and Control of Solar Systems (2011), published by Springer-Verlag. He has served on various IFAC technical committees and chaired the IFAC Publications Committee from 2002-2005. He was President of the European Control Association (2005-2007) and chaired the IEEE/CSS International Affairs Committee (2003-2006); he was also Chair of the IFAC Policy Committee and a member of the IEEE/CSS Board of Governors. He has acted as an evaluator of projects at national and European level and was appointed Manager of the Advanced Production Technology Program of the Spanish National R&D Program (1996-2000). He was one of the Spanish representatives on the Program Committee of the Growth research program and an expert for the Program Committee of the NMP research priority of the European Union. He has carried out review and editorial work for various technical journals and many conferences. At present he is one of the editors of the IFAC journal Control Engineering Practice, editor at large of the European Journal of Control and subject editor of the journal Optimal Control: Methods and Applications. He was Publication Chair for the IFAC World Congress b'02 and General Chair of the joint IEEE CDC and ECC 2005, and co-general chair of the joint 50th IEEE CDC-ECC 2011. (http://www.esi2.us.es/~eduardo/)

Host

>João Manuel Lage de Miranda Lemos

 
     
     
 

Prof. Barbara Liskov

INESC-ID associates with the DIFCTUNL DLS (http://www.di.fct.unl.pt/difctunl-distinguished-lecture-series)

October 2012

00:00

Abstract

>INESC-ID associates with the DIFCTUNL DLS for the talk of Prof. Barbara Liskov

Host

>Maria Inês Camarate de Campos Lynce de Faria

 
     
     
 

Dr. Anne-Marie Kermarrec

INRIA Senior Researcher (Directrice de recherche), INRIA-Rennes, FRANCE

WhatsUp : a P2P instant news items recommender

5 September 2012

15:00, Anfiteatro Ávila

Abstract

>WhatsUp is an instant news system aimed at large-scale networks, with no central bottleneck, single point of failure or censorship authority. Users express their opinions about the news items they receive by operating a like-dislike button. WhatsUp's collaborative filtering scheme leverages these opinions to dynamically maintain an implicit social network and ensures that users subsequently receive news items that are likely to match their interests. Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests, without revealing their profiles to other users. News items are disseminated through a heterogeneous epidemic protocol that (a) biases the choice of targets towards users with similar interests and (b) amplifies dissemination based on the interest in each actual news item. The push-based and asymmetric nature of the network created by WhatsUp provides natural support for limiting privacy breaches. An evaluation through large-scale simulations, a ModelNet emulation on a cluster and a PlanetLab deployment, using real traces collected both from Digg and from a real survey, shows that WhatsUp provides an efficient tradeoff between accuracy and completeness.
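
Two of the ingredients above, an implicit interest network derived from like/dislike profiles and dissemination biased towards similar peers, can be sketched compactly. The profiles, names and similarity choice are assumptions for illustration; the real system additionally avoids revealing raw profiles to other users.

    import math, random

    profiles = {                      # peer -> {news item: +1 like / -1 dislike}
        "alice": {"n1": 1, "n2": 1, "n3": -1},
        "bob":   {"n1": 1, "n2": 1},
        "carol": {"n2": -1, "n3": 1},
        "dave":  {"n1": -1, "n3": 1},
    }

    def similarity(p, q):
        # Cosine similarity, treating unrated items as zero.
        common = set(p) & set(q)
        dot = sum(p[i] * q[i] for i in common)
        return dot / (math.sqrt(len(p)) * math.sqrt(len(q))) if common else 0.0

    def gossip_targets(me, k=2):
        # Bias dissemination towards the most similar peers, but keep one
        # random target so new interests can still emerge and drift.
        others = [u for u in profiles if u != me]
        ranked = sorted(others, reverse=True,
                        key=lambda u: similarity(profiles[me], profiles[u]))
        return ranked[:k - 1] + [random.choice(others)]

    print(gossip_targets("alice"))    # e.g. ['bob', <one random peer>]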

Bio

>Before joining INRIA in February 2004, Anne-Marie Kermarrec was a Researcher at Microsoft Research in Cambridge from March 2000. Before that, she obtained her Ph.D. from the University of Rennes (France) in October 1996. She also spent one year (1996-1997) in the Computer Systems group of Vrije Universiteit in Amsterdam (The Netherlands), collaborating with Maarten van Steen and Andrew S. Tanenbaum, and was an Assistant Professor at the University of Rennes 1 from 1998 to 2000. She defended her "habilitation à diriger des recherches" in December 2002 on large-scale application-level multicast. Anne-Marie Kermarrec's research interests include peer-to-peer distributed systems, epidemic algorithms, content-based search in large-scale overlay networks, collaborative storage systems, search, collaborative filtering, social networks, and Web science. She was awarded an ERC Starting Grant (GOSSPLE, 2008-2013) to work on these topics. (http://www.irisa.fr/asap/?page_id=179)

Host

>Luís Eduardo Teixeira Rodrigues