INESC-ID   Instituto de Engenharia de Sistemas e Computadores, Investigação e Desenvolvimento em Lisboa

# List All Seminars


This talk introduces HOOVER, a distributed software framework for large-scale, dynamic graph modeling and analysis. HOOVER sits on top of OpenSHMEM, a PGAS programming system, and enables users to plug in application-specific logic while handling all runtime coordination of computation and communication. HOOVER has demonstrated scaling out to 24,576 cores, and is flexible enough to support a wide range of graph-based applications, including infectious disease modeling and anomaly detection.
Date: 15-Oct-2018    Time: 11:00:00    Location: 336

## On the Self in Selfie
Christoph Kirsch, University of Salzburg
Abstract—Selfie is a self-contained 64-bit, 10-KLOC implementation of (1) a self-compiling compiler written in a tiny subset of C called C* targeting a tiny subset of 64-bit RISC-V called RISC-U, (2) a self-executing RISC-U emulator, (3) a self-hosting hypervisor that virtualizes the emulated RISC-U machine, and (4) a prototypical symbolic execution engine that executes RISC-U symbolically. Selfie can compile, execute, and virtualize itself any number of times in a single invocation of the system, given adequate resources. There is also a simple linker, disassembler, debugger, and profiler. C* supports only two data types, uint64_t and uint64_t*, and RISC-U features just 14 instructions, in particular for unsigned arithmetic only, which significantly simplifies reasoning about correctness. Selfie was originally developed just for educational purposes but has by now become a research platform as well. In this talk, we show how selfie leverages the synergy of integrating compiler, target machine, and hypervisor in one self-referential package while orthogonalizing bootstrapping, virtual and heap memory management, emulated and virtualized concurrency, and even replay debugging and symbolic execution. This is joint work with A. Abyaneh, M. Aigner, S. Arming, C. Barthel, S. Bauer, T. Hütter, A. Kollert, M. Lippautz, C. Mayer, P. Mayer, C. Moesl, S. Oblasser, C. Poncelet, S. Seidl, A. Sokolova, and M. Widmoser.
Web Link: http://selfie.cs.uni-salzburg.at
Date: 12-Oct-2018    Time: 10:30:00    Location: 336

## The Future of Cyber-autonomy
David Brumley, Carnegie Mellon University
Abstract—My vision is to automatically check and defend the world's software from exploitable bugs. In order to achieve this vision, I am building technology, called Mayhem, that shifts the attack/defend game away from the current manual approaches for finding and fixing software security vulnerabilities to fully autonomous cyber reasoning systems.
Date: 10-Oct-2018    Time: 13:30:00    Location: Anfiteatro FA1 (Pav. Informática)

## Improved Maximum Likelihood Decoding using Sparse Parity-Check Matrices
Tobias Dietz, Technische Universität Kaiserslautern
Abstract—Maximum-likelihood decoding is an important and powerful tool in communications to obtain the optimal performance of a channel code. Unfortunately, simulating the maximum-likelihood performance of a code is a hard problem whose complexity grows exponentially with the blocklength of the code. To reduce this complexity, we minimize the number of ones in the underlying parity-check matrix, formulate this as an integer program, and give a heuristic algorithm to solve it. Using these minimized matrices, we significantly reduce the runtime of several ML decoders for several codes, resulting in speedups of up to 81% compared to the original matrices.
Date: 10-Oct-2018    Time: 11:30:00    Location: 336

## Efficient paths in ordinal weighted graphs
Luca Schafer, Technische Universität Kaiserslautern
Abstract—We investigate the single-source-single-destination "shortest" paths problem in acyclic graphs with ordinal weighted arc costs. We define the concepts of ordinal dominance and efficiency for paths and their associated ordinal levels, respectively.
Further, we show that the number of ordinally non-dominated path vectors from the source node to every other node in the graph is polynomially bounded, and we propose a polynomial-time labeling algorithm for finding the set of ordinally non-dominated path vectors from source to sink.
Date: 10-Oct-2018    Time: 10:00:00    Location: 336

## Crypto-hardware design for secure applications
Erica Tena-Sánchez and F. E. Potestad-Ordóñez, University of Seville
Abstract—Any electronic device considered 'secure', and in fact any electronic device handling relevant information, makes use of cryptographic services to ensure the confidentiality, authentication and integrity of the processed data. These cryptographic engines implement mathematically secure algorithms; however, leakages from their physical implementations can reveal sensitive information during computation. This talk will present a brief overview of the design and evaluation of hardware countermeasures against side-channel attacks and fault-injection attacks that can be deployed on these devices.
Date: 09-Oct-2018    Time: 11:00:00    Location: 336

## Interactive Systems based on Electrical Muscle Stimulation
Pedro Lopes, University of Chicago
Abstract—How can interactive devices connect with users in the most immediate and intimate way? This question has driven interactive computing for decades. If we think back to the early days of computing, user and device were quite distant, often located in separate rooms. Then, in the '70s, personal computers "moved in" with users. In the '90s, mobile devices moved computing into users' pockets. More recently, wearables brought computing into constant physical contact with the user's skin. These transitions proved to be useful: moving closer to users and spending more time with them allowed devices to perceive more of the user and to act more personally. The main question that drives my research is: what is the next logical step? How can computing devices become even more personal? Some researchers argue that the next generation of interactive devices will move past the user's skin and be directly implanted inside the user's body. This has already happened in that we have pacemakers, insulin pumps, etc. However, I argue that what we see is not devices moving towards the inside of the user's body, but towards the "interface" of the user's body that they need to address in order to perform their function. This idea holds the key to more immediate and personal communication between device and user. The question is how to increase this immediacy. My approach is to create devices that intentionally borrow parts of the user's body for input and output, rather than adding more technology to the body. I call this concept "devices that overlap with the user's body". I'll demonstrate my work in which I explored one specific flavor of such devices, i.e., devices that borrow the user's muscles. In my research, I create computing devices that interact with the user by reading and controlling muscle activity. My devices are based on medical-grade signal generators and electrodes attached to the user's skin that send electrical impulses to the user's muscles; these impulses then cause the user's muscles to contract. While electrical muscle stimulation (EMS) devices have been used to regenerate lost motor functions in rehabilitation medicine since the '60s, during my PhD I explored EMS as a means for creating interactive systems. My devices form two main categories: (1) devices that allow users eyes-free access to information, such as a variable, a tool, or a plot, by means of their proprioceptive sense; and (2) devices that increase immersion in virtual reality by simulating large forces, such as wind, physical impact, or walls and heavy objects.
Date: 26-Sep-2018    Time: 13:30:00    Location: Tagus Park (room TBD), and Alameda (room 0.19 by VC)

## State-of-the-Art FinFET Technology: An Industry Designer's Perspective
Gonçalo Nogueira, Socionext, Inc.
Abstract—Size scaling of CMOS transistors has been happening for the past 30 years, with technologies like FinFET or FD-SOI being used recently to make up for limitations found in bulk technology. With TSMC releasing 5nm FinFET in 2019 (with gate lengths on the order of dozens of atoms wide), design and layout are changing significantly from what is seen in older technologies. This seminar addresses the topic of FinFET from an industry designer's perspective, with the following content: an introduction to FinFET, design and layout with FinFETs, advantages and challenges, and lastly, the expected future of solid-state circuits.
Date: 19-Sep-2018    Time: 17:00:00    Location: Room EA4, North Tower, Alameda

## "How Acting Through Autonomous Machines Changes People's Decision Making"
Celso de Melo, US Army Research Lab
Abstract—Recent times have seen the emergence of a new breed of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment). In this talk, I show that acting through machines changes the way people solve these social dilemmas, and I'll present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We further show this happens because programming vehicles to act autonomously causes short-term rewards to become less salient, which leads participants to consider broader societal interests and behave more cooperatively. Our findings also indicate this effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.
Date: 05-Sep-2018    Time: 14:00:00    Location: Room 1.38 - IST Taguspark

## IT Governance in the Board Room
Steven de Haes, University of Antwerp
Abstract—Disruptive new technologies are increasing and have an important influence on the business we are doing. Previously, the board could delegate, ignore or avoid them, but that is no longer the case. Yet, it seems that 80% of boards of directors are still looking away. Digital transformation seems to be 'the elephant in the boardroom'. In this session, we will argue that boards need to extend their governance accountability, from an often mono-focused view of finance and legal as a proxy for corporate governance, to include technology, and to provide the digital leadership and organizational capabilities that ensure the enterprise's IT sustains and extends the enterprise's strategies and objectives.
Date: 16-Jul-2018    Time: 18:00:00    Location: AM Amphitheater, Alameda Campus, IST

## A Compiler-based Approach to Mitigate Fault Attacks Using SIMD Instructions
Alexander V. Veidenbaum, University of California at Irvine
Abstract—Today's general-purpose microprocessors support vector (SIMD) instructions. This creates an opportunity for a new compilation approach that mitigates the impact of faults on cryptographic implementations, which is the subject of this work. A compiler-based approach is proposed to automatically and selectively apply vectorization in a cryptographic library, transforming a standard software library into a library with vectorized code that is resistant to glitches. Unlike traditional vectorization for performance, the proposed compilation flow uses the multiple vector lanes to introduce data redundancy in cryptographic computations. The approach has low overhead in both code size and execution time. Experimental results show that the proposed approach generates on average only 26% more dynamic instructions over a series of asymmetric cryptographic algorithms in the Libgcrypt library. Only 0.36% of injected faults go undetected by this approach.
Date: 28-Jun-2018    Time: 11:00:00    Location: 336

## Bridging the design and implementation of distributed systems with program analysis
Ivan Beschastnikh, University of British Columbia
Abstract—Much of today's software runs in a distributed context: mobile apps communicate with the cloud, web apps interface with complex distributed backends, and cloud-based systems use geo-distribution and replication for performance, scalability, and fault tolerance. However, the distributed systems that power most of today's infrastructure pose unique challenges for software developers. For example, reasoning about the concurrent activities of system nodes, and even understanding the system's communication topology, can be difficult. In this talk I will overview three program analysis techniques developed in my group that address these challenges. First, I will present Dinv, a dynamic analysis technique for inferring likely distributed state properties of distributed systems. By relating state across nodes in the system, Dinv infers properties that help reason about system correctness. Second, I will review Dara, a model checker for distributed systems that introduces new techniques to cope with state explosion by combining traditional abstract model checking with dynamic model inference techniques. Finally, I will discuss PGo, a compiler that compiles formal specifications written in PlusCal/TLA+ into runnable distributed system implementations in the Go language. All three projects employ program analysis in the context of distributed systems and aim to bridge the gap between the design and implementations of such systems.
Date: 21-Jun-2018    Time: 14:00:00    Location: 336

## Bridging Informatics and Biology: case studies from the "field"
Daniel Sobral, Gulbenkian Science Institute
Abstract—In Biology, as in many disciplines, technological advances are generating a flurry of new data. Many researchers in Biology are having a hard time processing and integrating these new massive datasets to obtain biological insights. The Bioinformatics Unit at the IGC is a service facility that provides support to researchers in handling and processing large datasets of biological data, particularly sequencing data. In this talk, I will give a few examples of the user support we've been providing, as well as some of our attempts at empowering users and increasing their autonomy.
Date: 14-Jun-2018    Time: 10:00:00    Location: IST Taguspark - Room 1.38

## Distributed Search and Recommendation with Profile Diversity
Esther Pacitti, INRIA & CNRS, University Montpellier
Abstract—With the advent of Web 3.0, the Internet of Things, and citizen science applications, users are producing bigger and bigger amounts of diverse data, which are stored in a large variety of systems. Since the users' data spaces are scattered among those independent systems, data sharing becomes a challenging problem. Distributed search and recommendation provides a general solution for data sharing, and among its various alternatives, gossip-based approaches are particularly interesting as they provide scalability, dynamicity, autonomy and decentralized control. Generally, in these approaches each participant maintains a cluster of "relevant" users, which are later employed in query processing. However, considering only relevance in the construction of the cluster introduces a significant amount of redundancy among users, which in turn leads to reduced recall. Indeed, when a query is submitted, the high similarity among the users in a cluster increases the probability of retrieving the same set of relevant items, thus limiting the number of distinct results that can be obtained. In this talk I will present new gossip-based clustering algorithms that incorporate diversity into the clustering score, validated through experimental evaluation over four real datasets, showing major gains in recall. In addition, I will also present some ongoing work on scientific data management carried out by the Zenith Inria team.
Date: 11-Jun-2018    Time: 14:30:00    Location: 336

## INVITED TALK - Dr. Amit Kumar Pandey
SoftBank Robotics Europe (SBR)
Abstract—Title: End User Expectations From the Social Robotics Revolution: an Industrial Perspective
Never before in the history of robotics have robots been so close to us, in our society. We are 'evolving', and so are our society, lifestyle and needs. AI has been with us for decades, and is now embodied in robots, penetrating ever more into our day-to-day life. All of this is converging towards a smarter eco-system of living, where social robots will coexist with us in harmony, for a smarter, healthier, safer and happier life. Such robots are supposed to be socially intelligent and to behave in socially expected and accepted manners. The talk will reinforce that social robots have a range of potential societal applications, and hence impact education needs and job opportunities as well. The talk will begin by illustrating some of these social robots and highlighting what it means to develop a socially intelligent robot, along with the associated R&D challenges. This will be followed by some use cases, end-user feedback and market analysis. The talk will conclude with some open challenges ahead, including social and ethical issues, and will emphasize the need for a bigger, multi-disciplinary effort and an eco-system of different stakeholders, including policy makers.
Date: 08-Jun-2018    Time: 11:00:00    Location: Room 1.38 - IST Taguspark

## Robot learning from few demonstrations by exploiting the structure and geometry of data
Sylvain Calinon, EPFL - École Polytechnique Fédérale de Lausanne
Abstract—Many human-centered robot applications would benefit from the development of robots that could acquire new movements and skills from human demonstration, and that could reproduce these movements in new situations. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. It requires the development of intuitive active learning interfaces to acquire meaningful demonstrations, the development of models that can exploit the structure and geometry of the acquired data in an efficient way, and the development of adaptive controllers that can exploit the learned task variations and coordination patterns. The developed models need to serve several purposes (recognition, prediction, generation) and be compatible with different learning strategies (imitation, emulation, exploration). I will present an approach combining model predictive control, statistical learning and differential geometry to pursue this goal. I will illustrate the proposed approach with various applications, including robots that are close to us (human-robot collaboration, robot for dressing assistance), part of us (prosthetic hand control from tactile array data), or far from us (teleoperation of a bimanual robot in deep water).
Date: 06-Jun-2018    Time: 11:00:00    Location: IST Alameda - DEI Informática II, room 0.19

## Urban Data Management, Analysis and Visualization
Claudio Silva, New York University
Abstract—The large volumes of urban data, along with vastly increased computing power, open up new opportunities to better understand cities. Encouraging success stories show that data can be leveraged to make operations more efficient, inform policies and planning, and improve the quality of life for residents. However, analyzing urban data often requires a staggering amount of work, from identifying relevant datasets, cleaning and integrating them, to performing exploratory analyses and creating predictive models that take into account spatio-temporal processes. Our long-term goal is to enable domain experts to crack the code of cities by freely exploring the vast amounts of urban data. In this talk, we will present methods and systems which combine data management, analytics, and visualization to increase the level of interactivity, scalability, and usability for urban data exploration. We will show practical uses of this novel technology in real applications. This work was supported in part by the National Science Foundation, the Moore-Sloan Data Science Environment at NYU, IBM Faculty Awards, AT&T, NYU Tandon School of Engineering and the NYU Center for Urban Science and Progress.
Date: 05-Jun-2018    Time: 15:30:00    Location: DEI Meeting Room

## Computer Graphics in the Age of AI and Big Data
Richard (Hao) Zhang, Simon Fraser University
Abstract—Computer graphics is traditionally defined as a field which covers all aspects of computer-assisted image synthesis. An introductory class in graphics mainly teaches how to turn an explicit model description, including geometric and photometric attributes, into one or more images. Under this classical and arguably narrow definition, computer graphics corresponds to a "forward" (synthesis) problem, in contrast to computer vision, which traditionally battles with the inverse (analysis) problem. In this talk, I will offer my view of what the NEW computer graphics is, especially in the current age of machine learning and data-driven computing. I will first remind ourselves of several well-known data challenges that are unique to graphics problems. Then, by altering the above classical definition of computer graphics, perhaps only slightly, I show that to do the synthesis right, one has to first "understand" the task and solve various inverse problems. In this sense, graphics and vision are converging, with data and learning playing key roles in both fields. A recurring challenge, however, is the general lack of "Big 3D Data", which graphics research is expected to address. I will show you a quick sampler of our recent work on data-driven and learning-based synthesis of 3D shapes and virtual scenes. Finally, I want to explore a new perspective on the synthesis problem: to mimic a higher-level human capability than pattern recognition and understanding.
Date: 05-Jun-2018    Time: 14:30:00    Location: DEI Meeting Room 0.19

## Talk
Kirk Bresniker, Chief Architect and HPE Fellow/VP
Abstract—Title: Exaflops, Zettabytes and microseconds – Preparing to capitalize on simultaneous regime change in Computing
Exascale supercomputers are transforming science and industry, and an intelligent social infrastructure comprised of tens of billions of autonomous agents hosts artificial intelligences devouring zettabytes of information in real time. These are unprecedented opportunities for information technology to precipitate societal transformation, but as we reach the twilight of Moore's Law, equally unprecedented is the uncertainty over how long conventional approaches will continue to allow sustainable computational growth to match changing demand. I'll review these motivating factors and then cover how The Machine Advanced Development Program, which I've led at Hewlett Packard Labs, is preparing to meet these complex opportunities.
Date: 30-May-2018    Time: 12:00:00    Location: 336

## Towards a Knowledge-Based Decision Support System using Propositional Analysis and Rhetorical Structure Theory
Cláudio Duque, Universidade de Brasília
Abstract—The project's leading objective is to develop a natural language interface for a knowledge-based decision support system (KBDSS) using rhetorical structure theory (RST) and propositional analysis. A KBDSS is a system that provides specialized expertise (problem-solving) stored as facts, rules, procedures, or similar structures that can be directly accessed by the user. The idea is to develop an independent module that, based on the texts in IRS collections, generates questions in natural language to help users find the relevant information in the system. It is a research project primarily, but not only, in the fields of linguistics, computational linguistics, artificial intelligence, information retrieval, and information science.
Date: 18-May-2018    Time: 14:00:00    Location: 020

## Hugo Rosa - Using Fuzzy Fingerprints For Cyberbullying Detection in Social Networks
Hugo Rosa, Inesc-ID
Abstract—As cyberbullying becomes more and more frequent in social networks, automatically detecting it and pro-actively acting upon it becomes of the utmost importance. In this work, we study how a recent technique with proven success in similar tasks, Fuzzy Fingerprints, performs when detecting textual cyberbullying in social networks. Despite the problem being commonly treated as a binary classification task, we argue that it is in fact a retrieval problem, where the only relevant performance is that of retrieving cyberbullying interactions. Experiments show that the Fuzzy Fingerprints approach slightly outperforms baseline classifiers when tested in a close-to-real-life scenario, where cyberbullying instances are rarer than those without cyberbullying.
Date: 18-May-2018    Time: 14:00:00    Location: 336

## INVITED TALK - Prof. Pawel Kulakowski
Abstract—Nanoscale Communications

The talk will discuss possible means for nanocommunications, i.e., communication between future nanomachines. An overview of possible approaches will be given, including the miniaturization of existing communication devices, building nanomachines from basic blocks, and molecular communication motivated by communication mechanisms already existing in biology. The talk will then focus on the phenomenon of FRET (Förster Resonance Energy Transfer). FRET can provide a viable communication means over nano-distances, with propagation delays of only a few nanoseconds. The theory of FRET will be introduced, followed by a report on performance experiments carried out over the last few years. The last part of the talk will present further simulation studies, showing possible applications of FRET-based nanocommunication.
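For background on why FRET only works over nano-distances: its transfer efficiency falls off with the sixth power of the donor-acceptor separation. A minimal sketch of this standard relation (the function name and the example distances are illustrative, not taken from the talk):

```python
# FRET efficiency as a function of donor-acceptor distance r, given the
# Förster radius R0 (the distance at which efficiency drops to 50%).
# Both distances in the same unit, e.g. nanometres.

def fret_efficiency(r: float, r0: float) -> float:
    """Standard FRET relation: E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At r = R0 the efficiency is exactly 0.5; doubling the distance
# drops it to about 1.5%, which confines FRET to nano-scale links.
print(fret_efficiency(5.0, 5.0))   # 0.5
print(fret_efficiency(10.0, 5.0))  # ~0.0154
```

The sharp sixth-power roll-off is what makes FRET attractive for links between adjacent nanomachines and useless beyond a few Förster radii.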

This talk first establishes the relevance of the mora-basic hypothesis, i.e., that moras (CVs) are the units underlying all human natural languages. It then spotlights a seemingly mysterious discrepancy in the prevalence of phonological dyslexia between the English-speaking world and the Japanese-speaking world: as high as 17% for the former and as low as 1% for the latter. On the basis of English dyslexic reading being marked by an overproduction of moraic (CV) units in the absence of rhyme (VC) units, the talk will show that the discrepancy is due to differences in prosodic structure between the two languages. For rhyme(VC)-oriented English, readers must derive the rhyme unit through prosodic restructuring from the underlying CV-C (do-g) to the rhyme-oriented C-VC (d-og). A failure to do so manifests as phonological dyslexia. For mora(CV)-oriented, rhymeless Japanese, such prosodic restructuring is irrelevant, and phonological dyslexia goes largely undetected.

The talk then moves on to exploring possible explanations for a failure in such prosodic restructuring. From the articulatory-phonological point of view, onset consonants are coarticulated with the following vowels. Moras (CVs) are thus formed automatically and essentially for free. In contrast, coda consonants are not coarticulated with the preceding vowels. Forming rhymes (VCs) instead requires a temporal-spatial decision load, which a dyslexic mind is unable to bear. Mora inclination is explained accordingly.
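The CV-C versus C-VC contrast above can be made concrete with a toy sketch. The two segmentation functions below are my own illustration of the competing parses of a CVC syllable, not the speaker's actual analysis:

```python
# Toy illustration of the two competing parses of a CVC syllable like "dog":
# the mora-oriented parse groups consonant+vowel (CV-C), while the
# rhyme-oriented parse groups the vowel with its coda (C-VC).

def moraic_parse(cvc: str) -> list:
    """CV-C: the onset coarticulates with the vowel, e.g. 'do'-'g'."""
    onset, vowel, coda = cvc
    return [onset + vowel, coda]

def rhyme_parse(cvc: str) -> list:
    """C-VC: the vowel groups with the coda into a rhyme, e.g. 'd'-'og'."""
    onset, vowel, coda = cvc
    return [onset, vowel + coda]

print(moraic_parse("dog"))  # ['do', 'g']
print(rhyme_parse("dog"))   # ['d', 'og']
```

On the account in the abstract, English readers must restructure from the first parse to the second; dyslexic readers fail at exactly that step.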

Program

- 9h30 New Directions for MRI Hardware and Acquisition, Lawrence Wald (Massachusetts General Hospital A. Martinos Center, Harvard Medical School, Harvard-MIT HST)
- 10h30 Brain Microstructure at Ultra-High Fields, Noam Shemesh (Champalimaud Neuroscience Program, Champalimaud Foundation)
- 10h55 Enhancing Simultaneous EEG-fMRI in humans: from 3T to 7T, Patrícia Figueiredo (Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa)
- 11h20 Coffee Break
- 11h35 Echo-planar Imaging of Human Brain Function, Physiology and Structure at 7 Tesla: Challenges and Opportunities, Marta Bianciardi (Massachusetts General Hospital, A. Martinos Center for Biomedical Imaging)
- 12h00 Reinforcement learning, habits, and tics, Tiago Maia (School of Medicine, Universidade de Lisboa)
- 12h25 Modern Optimization in Imaging: Some Recent Highlights, Mário Figueiredo (Telecommunications Institute, Instituto Superior Técnico, Universidade de Lisboa)
- 12h50 Closing Remarks
Date: 03-Dec-2015    Time: 09:00:00    Location: IST, anfiteatro Abreu Faro

## Operation and Design of VHF Self-Oscillating DC-DC Converter with Integrated Transformer
Igor M. Filanovsky, University of Alberta
Abstract—Increasing the operating frequency of a DC-DC converter is a direct way of reducing the size of energy storage elements such as bulky capacitors and inductors, which usually dominate the overall converter size. The challenges arise when the converter operating frequency is increased into the Very High Frequency (VHF) range, from 30 MHz to 300 MHz, where the conventional topologies become impractical. In the literature one can find converter circuits suitable for operation in this frequency range. Some of them, at the lower end of the band, have been breadboard-prototyped; yet their fully integrated realization is, to the best of our knowledge, not known.

VHF converters may be loosely characterized as circuits with self-oscillating resonant gate drivers, where the driver is realized using an oscillating circuit separate from the load. In our case the load and feedback circuits are combined using one integrated transformer. Usually an integrated coil is realized using the top metal layer, which has the lowest resistivity.

The layers under this coil can be used to create a transformer secondary without any additional silicon area. The transformer parameters and layout were carefully investigated, and it turns out that the high resistance of the secondary is actually beneficial in our case: the secondary operates as an open circuit, it does not load the primary, the description of the circuit operation is simplified, and the oscillation frequency can be evaluated.

In the proposed converter the primary is a "necessary" passive connection providing the path from the power supply to the capacitive load. The core of the system is the feedback loop, which, besides the secondary, includes a duty-cycle detector and a pulse-shaping circuit. The output signals of the pulse-shaping circuit are two in-phase rectangular pulses driving the gates of the power transistors. As usual in feedback systems, it is the feedback that determines parameters such as system stability; for this reason we provide a full calculation of the signals in the duty-cycle detector. We describe the full circuit of the proposed converter and its operation principle, then give a detailed analysis of the signals in the duty-cycle detector. Recommendations for a smooth start-up of the converter power transient are provided. We then describe the circuit layout and, finally, discuss the obtained results and outline directions for further investigation.
Date: 23-Nov-2015    Time: 14:00:00    Location: 336

## Ramon Llull: From the Ars Magna to Agreement Computing
Carles Sierra, IIIA-CSIC
Abstract—Philosopher Ramon Llull (1232-1316) proposed the Ars Generalis, an argumentative method to persuade non-Christians of the truth of the Christian faith. Although this effort was obviously futile, Ramon Llull made a seminal contribution to one of the most interesting research topics in multiagent systems: Agreement Computing. He proposed a basic alphabet (later extended by Leibniz, who used numbers) whose combinations would construct a coherent vision of the world with which everybody would need to agree. In this talk, I will describe Llull's contributions to Logic, Argumentation and Social Choice and, time permitting, some of my current work in the area of Agreement Computing.
Date: 05-Nov-2015    Time: 15:00:00    Location: IST Taguspark, Porto Salvo, room 0.65

## Predictive and Scalable Macromolecular Modeling
Chandrajit Bajaj, University of Texas at Austin
Abstract—Most biomolecular complexes involve three or more molecules, forming macromolecules consisting of thousands to a million atoms. We consider fast molecular modeling algorithms and data structures to support automated prediction of biomolecular structure assemblies, formulating it as the approximate solution of a non-convex geometric optimization problem. The conformations of the macromolecules with respect to each other are optimized with respect to a hierarchical interface-matching score based on molecular energetic potentials (Lennard-Jones, Coulombic, generalized Born, Poisson-Boltzmann). The assembly prediction decision procedure involves both search and scoring over very high-dimensional spaces (O(6^n) for n rigid molecules) and, moreover, is provably NP-hard. To make things even more complicated, predicting biomolecular complexes requires the search optimization to include molecular flexibility and induced conformational changes as the assembly interfaces complementarily align. I shall also briefly present fast computation methods which run on commodity multicore CPUs and manycore GPUs. The key idea is to trade off the accuracy of pairwise, long-range atomistic energetics for a higher speed of execution. Our CUDA kernel for GPU acceleration uses a cache-friendly, recursive and linear-space octree data structure to handle very large molecular structures with up to several million atoms. Based on this CUDA kernel, we utilize a hybrid method which simultaneously exploits both CPU and GPU cores to provide the best performance based on selected parameters of the approximation scheme.
Date: 11-Sep-2015    Time: 11:00:00    Location: 02.2 Centro de Congressos IST Lattice-based crypto: parallelization of sieving algorithms on multicore CPUs Artur Mariano Universität Darmstadt Abstract—Quantum computers pose a serious threat to cryptoschemes, since classic schemes like RSA or Diffie-Hellman can be broken in the presence of quantum computers. Lattice-based cryptography stands out as one of the most prominent types of quantum-immune cryptography. The main task taken on by cryptographers at this point in time is the assessment of potential attacks against lattice-based schemes, and the development of schemes that thwart the attacks known to date. In this talk, I will present lattice-based cryptography from a cryptanalysis (aka attack) standpoint. To this end, I will explain what lattices are, which lattice problems are interesting for cryptography and which algorithms are usually used to address these problems. I will then select specific algorithms for the SVP, a particularly relevant problem, and explain in detail how they work and how they can be implemented and parallelized efficiently on shared-memory CPU systems. This is achieved with lock-free data structures that scale linearly with the number of used cores, and HPC techniques such as data prefetching and memory pools. Date: 23-Jul-2015    Time: 17:30:00    Location: 020 Temporal Information Retrieval – Understanding Time Sensitive Queries Ricardo Campos Instituto Politécnico de Tomar Abstract—The amount of information that is produced every day is growing exponentially. New pages are added, deleted or simply updated at an incredible pace. With this steadily increasing growth of the web, a huge amount of temporal information has become widely available. This information can be very useful in helping to meet users’ information needs whenever they include temporal intents. 
However, retrieving the information that meets a user's temporal query demands is still an open challenge. The ambiguity of the query is traditionally one of the causes impeding the retrieval of relevant information. This is particularly evident in the case of temporal queries, where users tend to be subjective when expressing their intents (e.g., “avatar movie” instead of “avatar movie 2009”). Determining the possible times of the query is therefore of the utmost importance when attempting to achieve more effective results. In this talk, we will describe how to use the information extracted from web page contents to reach this goal. The fact that temporal expressions relevant to the query can be automatically determined proves its usefulness by paving the way to the emergence of a number of temporal IR applications, which we will survey in this talk. We will specifically focus on GTE-Cluster and GTE-Rank, two temporal applications that stem from this research. Date: 20-Jul-2015    Time: 14:30:00    Location: 336 Distributed Route Aggregation on the Global Network (DRAGON) João Luís Sobrinho Instituto de Telecomunicações (IT) Abstract—The Internet routing system faces serious scalability challenges due to the growing number of IP prefixes that need to be propagated throughout the network. Although IP prefixes are assigned hierarchically and roughly align with geographic regions, today's Border Gateway Protocol (BGP) and operational practices do not exploit opportunities to aggregate routing information. In the talk, I will present DRAGON, a distributed route-aggregation technique whereby nodes analyze BGP routes across different prefixes to determine which of them can be filtered while respecting the routing policies for forwarding data-packets. DRAGON works with BGP, can be deployed incrementally, and offers incentives for Autonomous Systems (ASs) to upgrade their router software. 
I will illustrate the design of DRAGON through a number of examples and I will present results on its performance. Experiments with realistic AS-level topologies, assignments of IP prefixes, and routing policies show that DRAGON reduces the number of prefixes in the forwarding tables of each AS by close to 80%, with minimal stretch in the lengths of the AS-paths traversed by data-packets. Date: 30-Jun-2015    Time: 10:00:00    Location: 336 Optimized Algorithms and Dedicated Hardware for Video Coding Marcelo Porto@Universidade Federal de Pelotas (UFPel), Brasil, Luciano Agostini@Universidade Federal de Pelotas (UFPel), Brasil Abstract—This talk presents the main research topics in video coding under development at the Group of Architectures and Integrated Circuits (GACI) of the Universidade Federal de Pelotas (UFPel), Brazil. Two works are presented and discussed in greater detail. The first addresses the reduction of the memory bandwidth required for video coding through reference frame compression. The developed solution, called Double Differential Reference Frame Compressor (DRFC), reduces the bandwidth required for communication with external memory by about 69%, yielding a reduction of about 65% in the energy consumed by memory accesses. The second work presents a scheme for complexity reduction in 3D video coding under the 3D-HEVC standard. The scheme is based on two algorithms, Simplified Edge Detection (SED) and Gradient Based Mode One Filter (GMOF), which aim to reduce the complexity of intra prediction of depth maps; the proposed scheme achieves a 7% to 35% reduction in processing time, with minimal losses in coding efficiency. 
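DRFC itself is a dedicated hardware design, but the differential principle behind such reference frame compressors is easy to see in software. As a toy sketch (our illustration, not the authors' algorithm), a first-order differential coder for one row of pixels stores the first sample and then only the neighbor-to-neighbor differences, which are typically small and therefore cheap to encode; applying the same step to the deltas themselves gives a "double differential" variant:

```python
def delta_encode(row):
    """Keep the first sample, then store successive differences."""
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def delta_decode(deltas):
    """Invert delta_encode by a running sum."""
    row = [deltas[0]]
    for d in deltas[1:]:
        row.append(row[-1] + d)
    return row
```

For example, delta_encode([100, 102, 101, 105]) yields [100, 2, -1, 4], and delta_decode recovers the original row exactly.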
Date: 04-Jun-2015    Time: 16:00:00    Location: 336 Trends in Automatic Software Testing Wintrust Abstract—Automated functional tests allow a significant reduction in the validation effort every time the application is updated, making them well suited to the implementation of regression testing. Additionally, the accuracy and the coverage rate of this type of testing are much higher, because it allows for the execution, over a short period of time, of hundreds or thousands of test cases. Finally, another important aspect is that it makes it easier to obtain metrics and management information. Besides automated functional testing, Wintrust also specializes in executing performance tests (volume, stress and load testing) in pre-production environments, as well as in measuring response times in production environments. Date: 03-Jun-2015    Time: 10:00:00    Location: IST, anfiteatro GA1 Hardware Security: Challenges, Solutions and Opportunities Chip Hong Chang Nanyang Technological University Abstract—The geographical dispersion of chip design activities, coupled with heavy reliance on third-party hardware intellectual properties (IPs), has led to the infiltration of counterfeit and malicious chips into the integrated circuit (IC) design and fabrication flow. Counterfeit chips (such as unauthorized copies, remarked/recycled dice, overproduced and subverted chips, or cloned designs) pose a major threat to all stakeholders in the IC supply chain, from designers, manufacturers and system integrators to end users. The consequences they cause to electronic equipment and critical infrastructure can be disastrous, yet identifying compromised ICs is extremely difficult. New attack scenarios could put the integrated electronics ecosystem in dire peril if nothing is done to avert these hardware security threats. This talk provides an overview of our research effort in hardware security. 
Constraint-based watermarking and fingerprinting are first introduced as a detection approach to hardware IP copyright protection, which can be augmented by an identity-based signature scheme to enable multiple IP cores marked by different authors in a single chip to be publicly authenticable in the field by the end users. As reusable IPs sold in the form of FPGA configuration bitstreams are vulnerable to cloning, misappropriation, reverse engineering and hardware Trojan (HT) attacks, a pay-per-use licensing scheme is proposed to assure the secure installation of FPGA IP cores onto contracted devices agreed upon by the IP provider and IP buyer. A side-channel analysis method for HT detection and an active current-sensing circuit for fast screening of HT-infected chips will also be presented. The last part of this talk will introduce disorder-based methods to avoid the long-term presence of keys in vulnerable hardware. These methods enable random, unique and physically unclonable device fingerprints to be generated on demand for authentication and other cryptographic applications. The high-quality physical unclonable functions (PUFs) we proposed include the robust RO-PUF for resource-constrained platforms, a CMOS image sensor based PUF for sensor-level authentication, and PUFs based on emerging non-volatile memory technologies. Finally, some on-going and future research topics addressing the challenges and opportunities in hardware security will be outlined. Date: 27-May-2015    Time: 11:00:00    Location: Electrical and Computer Engineering Depart. Meeting Room, IST Computing in Space with OpenSPL Georgi Gaydadjiev Chalmers University Abstract—For a long time, all atomic arithmetic and storage structures of computing systems were designed as two-dimensional (2D) structures on silicon. 
Currently, processor vendors offer chips with steadily growing numbers of cores, and recent circuits have started to grow in the third dimension by integrating silicon dies on top of each other. All of this results in a severe increase in programming complexity. To date, a predominantly one-dimensional view of computing system organization and behavior has been used, forming a severe obstacle to exploiting all the associated advantages. A more natural, at least 2D, view of computer systems is required to represent the physical reality more closely in both space and time. This calls for radically novel approaches in terms of programming and system design. Computing in space allows designers to express complex mathematical operations in a more natural, space- and area-aware way and map them onto the underlying hardware resources. OpenSPL is one such approach that can be used to partition, lay out and optimize programs at all levels, from high-level algorithmic transformations down to individual custom bit manipulations. In addition, the OpenSPL execution model enables highly efficient scheduling (better called choreography) of all basic computational actions with the guarantee of no side effects. It is clear that this approach requires a new generation of design tools and methods and a novel way to measure (or rate) performance as compared to all traditional practices. In this talk we will address the topics relevant to spatial computing and show its enormous capabilities for designing power-efficient computing systems. Examples and results based on real systems deployed by Maxeler Technologies will emphasize the advantages of this approach but will also stress the difficulties along the road ahead. 
Date: 01-Apr-2015    Time: 14:00:00    Location: 336 The University of the Future: Taking Knowledge Out of Campus, A Personal Tale Nivio Ziviani Universidade Federal de Minas Gerais Abstract—An important way of wealth generation is the creation of knowledge intensive startups from research results. The objective of this talk is to present the experience of startup creation in the Department of Computer Science of the Universidade Federal de Minas Gerais in Brazil. We will discuss three examples: (i) Miner Technology Group, sold to the group Folha de São Paulo/UOL in June 1999, which is one of the first experiences in spinning off a Web startup company from research conducted at a Brazilian university; (ii) Akwan Information Technologies, which became a successful startup and a reference for web search in Brazil and was acquired by Google Inc. in July 2005—an acquisition from which Google bootstrapped its R&D Center for Latin America, located in Belo Horizonte; and (iii) Zunnit Technologies — a new startup company focused on the convergence of Deep Learning and Big Data aiming to improve knowledge of user behavior, management of multimedia assets, sales leads, or recommendation of items of interest for web users. Date: 27-Mar-2015    Time: 11:30:00    Location: IST Room QA 1.1 (South Tower) Lasp: a language for eventually consistent distributed programming with CRDTs Peter Van Roy@Université catholique de Louvain, Christopher Meiklejohn@Basho Technologies Abstract—We propose Lasp, a new programming language designed to simplify large-scale fault-tolerant distributed programming. Lasp is being developed in the SyncFree European project (syncfree.lip6.fr). It leverages ideas from distributed dataflow extended with convergent replicated data types (CRDTs). This supports computations where not all participants are online together at a given moment. 
The initial design supports synchronization-free programming by combining CRDTs with primitives for composing them inspired by functional programming. This lets us write long-lived fault-tolerant distributed applications, including ones with nonmonotonic behavior, in a functional paradigm. The initial prototype is implemented as an Erlang library built on top of the riak-core distributed systems infrastructure, which is based on a ring with consistent hashing. We show how to implement one nontrivial large-scale application, the ad counter scenario from SyncFree. Future extensions of Lasp will focus on efficiency, practicality, and extensions to add synchronization where needed, such as explicit causality and mergeable transactions. Date: 13-Mar-2015    Time: 10:00:00    Location: 020 Rethinking reverse converter design: From algorithms to hardware components Amir Sabbagh Molahosseini Islamic Azad University Abstract—In this talk, a practical methodology is introduced for designing RNS reverse converters with the desired characteristics based on the target application’s constraints. This procedure can also be applied to enhance other particularly difficult RNS operations, such as scaling, sign detection and magnitude comparison, resulting in practical and efficient RNS. The presented area-delay-power-aware adder placement procedure is broken down into four phases, namely: i) moduli set and reverse converter architecture selection, ii) placement using theoretical analysis, iii) implementation, and iv) placement using experimental results. Furthermore, during these phases, key notes on the hardware design of reverse converters are given. The effectiveness of the proposed placement procedure is shown using distinct converters and implementation technologies. 
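As background for the talk: reverse conversion is the step that maps a residue number system (RNS) representation back to a binary integer. A direct software sketch via the Chinese Remainder Theorem (illustrative only; the talk concerns optimized hardware converters, and the moduli below are just the popular example set {2^n−1, 2^n, 2^n+1} with n = 4):

```python
from math import prod

def rns_to_int(residues, moduli):
    """CRT reverse conversion: recover X mod M from residues x_i = X mod m_i,
    for pairwise coprime moduli m_i, where M is the product of the moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse
    return x % M

moduli = (15, 16, 17)                        # {2^4 - 1, 2^4, 2^4 + 1}
residues = tuple(2015 % m for m in moduli)   # RNS representation of 2015
```

Here rns_to_int(residues, moduli) recovers 2015, since the dynamic range M = 15·16·17 = 4080 exceeds it; hardware converters for such moduli sets specialize this generic CRT arithmetic into modulo-(2^n±1) adder structures, which is where the placement methodology above comes in.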
Date: 04-Mar-2015    Time: 16:00:00    Location: 336 Non-cooperative and Deceptive Dialogue David Traum ICT - University of Southern California (USC) Abstract—Cooperation is usually seen as a central concept in the pragmatics of dialogue. There are a number of accounts of dialogue performance and interpretation that require some notion of cooperation or collaboration as part of the explanatory mechanism of communication (e.g., Grice's maxims, interpretation of indirect speech acts, etc.). Most advanced computational work on dialogue systems has also generally assumed cooperativity, with recognizing and conforming to the user's intention as central to the success of the dialogue system. In this talk I will review some recent work on modeling non-cooperative dialogue, and the creation of virtual humans who engage in non-cooperative and deceptive dialogue. These include tactical questioning role-playing agents, who have conditions under which they will reveal truthful or misleading information, and negotiating agents, whose goals may be at odds with those of a human dialogue participant, who calculate utilities for different dialogue strategies, and who can also keep secrets, using plan-based inference to avoid giving clues that would reveal them. Date: 20-Feb-2015    Time: 14:00:00    Location: room 0.65 @INESC-ID Taguspark Collaborative Mobile Charging and Coverage in Wireless Sensor Networks Jie Wu Temple University Abstract—The limited battery capacity of sensor nodes has become the biggest impediment to wireless sensor network (WSN) applications over the years. Recent breakthroughs in wireless energy transfer, based on rechargeable lithium batteries, provide a promising application of mobile vehicles. These mobile vehicles act as mobile chargers that transfer energy wirelessly to static sensors in an efficient way. In this talk, we discuss some of our recent results on several charging and coverage problems involving multiple mobile chargers. 
In collaborative mobile charging, a fixed charging location, called a base station (BS), provides a source of energy to mobile chargers, which in turn are allowed to recharge each other while collaboratively charging static sensors. The objective is to ensure sensor coverage while maximizing the ratio of the amount of payload energy (used to charge sensors) to overhead energy (used to move mobile chargers from one location to another). This is done such that none of the sensors will run out of batteries. Here, sensor coverage spans both dimensions of time and space. We first consider the uniform case, where all sensors consume energy at the same rate, and propose an optimal scheduling scheme that can cover a one-dimensional (1-D) WSN with infinite length. Then, we present several greedy scheduling solutions to 1-D WSNs with non-uniform sensors and 2-D WSNs, both of which are NP-hard. Finally, we study another variation, in which all mobile chargers have batteries of unlimited capacity without resorting to a BS for recharging. The objective is then to deploy and schedule a minimum number of mobile chargers that can cover all sensors. Again, we provide an optimal solution to this problem in a 1-D WSN with uniform sensors and several greedy solutions with competitive approximation ratios to the problem setting of 1-D WSNs with non-uniform sensors and 2-D WSNs, respectively. Date: 12-Feb-2015    Time: 10:00:00    Location: 336 Reading News with Maps by Exploiting Spatial Synonyms Hanan Samet University of Maryland Abstract—NewsStand is an example application of a general framework to enable people to search for information using a map query interface, where the information results from monitoring the output of over 10,000 RSS news sources and is available for retrieval within minutes of publication. 
The issues that arise in the design of a system like NewsStand, including the identification of words that correspond to geographic locations, are discussed, and examples are provided of its utility. More details can be found in the video at http://vimeo.com/106352925 which accompanies the cover article of the October 2014 issue of the Communications of the ACM about NewsStand, which can be found at http://tinyurl.com/newsstand-cacm Date: 08-Jan-2015    Time: 11:00:00    Location: DEI / Alameda / Informática II Frequent Sequence Mining in MapReduce* Klaus Berberich Max Planck Institute for Informatics Abstract—Frequent sequence mining is a fundamental building block in data mining. While the problem has been intensively studied, existing methods cannot handle datasets consisting of billions of sequences. Datasets of that scale are common in applications such as natural language processing, when computing n-gram statistics over large-scale document collections, and business intelligence, when analyzing the sessions of millions of users. In this talk, I will present two methods that we developed recently to mine frequent sequences using MapReduce as a platform for distributed data processing. Suffix-Sigma, the first method, targets the special case of contiguous sequences such as n-grams. It relies on sorting and aggregating sequence suffixes, leveraging ideas from string processing. MG-FSM, the second method, also identifies non-contiguous frequent sequences. To this end, it partitions and prepares the input in such a way that frequent sequences can be efficiently mined in isolation on each of the resulting partitions using any existing method. Experiments on two large-scale document collections demonstrate that Suffix-Sigma and MG-FSM are substantially more efficient and scalable than alternative approaches. 
Furthermore, I will discuss extensions of Suffix-Sigma and MG-FSM, for instance, to report only closed or maximal sequences and thus drastically reduce their output. (* Joint INESC-ID/LASIGE Seminar) Klaus Berberich is a Senior Researcher at the Max Planck Institute for Informatics, where he coordinates the research area Text + Time Search & Analytics. His research is rooted in Information Retrieval and touches the related areas of Data Management and Data Mining. Klaus has built a time machine -- to search in web archives. More recently, he has worked on frequent sequence mining algorithms for modern platforms such as MapReduce. His ongoing research focuses on (i) novelty & diversity in web archive search; (ii) temporal linking of document collections; and (iii) mining document collections for insights about the past, present, and future. Klaus holds a doctoral degree (2010, summa cum laude) and a diploma (2004) in Computer Science from Saarland University. He has served on numerous program committees in his research communities of interest (IR, DB, DM). Date: 04-Dec-2014    Time: 15:30:00    Location: 020 Inside Information - From Martian Meteorites to Mummies Anders Ynnerman Eurographics - The European Association for Computer Graphics Abstract—In recent decades, imaging modalities have advanced beyond recognition, and data of rapidly increasing size and quality can be captured at high speed. This talk will show how data visualization can be used to provide public visitor venues, such as museums, science centers and zoos, with unique interactive learning experiences. By combining data visualization techniques with technologies such as interactive multi-touch tables and intuitive user interfaces, visitors can conduct guided browsing of large volumetric image data. The visitors then themselves become the explorers of the normally invisible interior of unique artifacts and subjects. 
The talk will take its starting point in the current state of the art in CT and MRI scanning technology. It will then discuss the latest high-quality interactive volume rendering and multi-resolution techniques for large-scale data and how they are tailored for use in public spaces. Examples will then be shown of how the inside workings of the human body, exotic animals, and natural history subjects, such as a Martian meteorite or even mummies, can be explored interactively. The recent mummy installation at the British Museum will be shown and discussed from both a curator and a visitor perspective, and results from a 3-month trial period in the galleries will be presented. Date: 01-Dec-2014    Time: 14:00:00    Location: 336 The Effect of Streaming Time-shifted TV on TV Consumption Pedro Ferreira Carnegie Mellon University Abstract—How does the introduction of time-shifted TV (TSTV) change the total value of advertising captured by networks? How does the introduction of TSTV change viewership patterns across viewers and across programs? We analyze the effects of the introduction of TSTV on a large cable operator serving more than 1.5 million users that introduced TSTV in 2012. We use click-stream data from TV remote controls in 2012 and 2013 to analyze the short- and long-term effects of TSTV. We use fixed effects and differences-in-differences with propensity score matching to obtain our results. We find that the introduction of TSTV does not increase TV viewership. In the short term, TSTV viewership accounts for 6% of total TV viewership. In the long run, TSTV viewership accounts for 9% of total viewership. We also find that the concentration of TV consumption across programs increases: the most popular programs are watched disproportionately more in time-shift. Interestingly, the most popular programs are also the ones that lose the most in live viewership. 
This decrease in total live viewership for the most popular programs decreases the total value networks can appropriate from advertising. Pedro Ferreira is an assistant professor of Information Systems and Management at the Heinz College and at the Department of Engineering and Public Policy, Carnegie Mellon University (CMU). He received a Ph.D. in Telecommunications Policy from CMU and an M.Sc. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Pedro’s research interests lie in two major domains: identifying causal effects in dense network settings, with direct application to understanding the future of the digital media industry, and the evolving role of technology in the economics of education. Currently, he is working on a series of large-scale randomized experiments in network settings aimed at identifying the role of peer influence in the consumption of media. Pedro has published in top journals and top peer-reviewed research conferences such as Management Science, Management Information Systems Quarterly and the IEEE Conference on Social Computing. Date: 27-Nov-2014    Time: 14:30:00    Location: 020 Using a GPU coprocessor to accelerate computations in the 3D MPDATA algorithm Krzysztof Rojek Czestochowa University of Technology Abstract—This talk will address an efficient and portable adaptation of the stencil-based 3D MPDATA algorithm to GPU clusters. We propose a performance model which allows for the efficient distribution of computation across GPU resources. Since MPDATA is strongly memory-bound, the main challenge in providing a high-performance implementation is to reduce GPU global memory transactions. To this end, our performance model ensures a comprehensive analysis of transactions based on local memory utilization, sizes of halo areas (ghost zones), and data dependencies between and within stencils. 
The results of the analysis performed using the proposed model are the number of GPU kernels, the distribution of stencils across kernels, as well as the sizes of the CUDA blocks for each kernel. Date: 27-Nov-2014    Time: 10:00:00    Location: 336 Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages* Michel Raynal Institut Universitaire de France Abstract—This talk presents a new round-based asynchronous consensus algorithm that copes with up to t < n/3 Byzantine processes, where n is the total number of processes. In addition to being signature-free and optimal with respect to the value of t, this algorithm has several noteworthy properties: the expected number of rounds to decide is four, each round is composed of two or three communication steps and involves O(n²) messages, and a message is composed of a round number plus a single bit. To attain this goal, the consensus algorithm relies on a common coin as defined by Rabin, and on a new, extremely simple and powerful broadcast abstraction suited to binary values. The main target when designing this algorithm was to obtain a cheap and simple algorithm. This was motivated by the fact that, among the first-class properties, simplicity -- albeit sometimes under-estimated or even ignored -- is a major one. *(this is joint work with Achour Mostéfaoui and Hamouma Moumen) Date: 14-Nov-2014 Time: 16:00:00 Location: 020 Eventual Leader Election in Evolving Mobile Networks Fabíola Greve Universidade Federal da Bahia (UFBA) Abstract—Many reliable distributed services rely on eventual leader election to coordinate actions. The eventual leader detector has been proposed as a way to implement such an abstraction. It ensures that each process in the system will eventually be provided with a unique leader, elected from the set of correct processes, in spite of crashes and uncertainties. A number of eventual leader election protocols have been suggested. 
Nonetheless, as far as we are aware, none of these protocols tolerates a free pattern of node mobility. This talk presents a new protocol for this scenario of dynamic and mobile unknown networks. Fabíola Greve is an Associate Professor at the Universidade Federal da Bahia, with work in the area of distributed and dependable computing. Date: 30-Oct-2014 Time: 11:00:00 Location: 020 Data integration tools for pre-processing biological data Valéria Magalhães Pequeno INESC-ID Lisboa and IST Abstract—The increasing use of Electronic Health Records (EHRs) enables a better analysis of patient data, improving the quality of medical care. EHRs must be processed in order to provide a variety of services to the physician, such as risk classification and summarization. EHRs are usually stored in unstructured text or Excel files containing different data formats and types, missing information, and, sometimes, inconsistent information. Therefore, before analyzing the data, we often need to transform and integrate it. In this presentation, we show some examples of data integration tools that can be used to extract and transform data. As an example, we use an Excel file containing exam information regarding patients with ALS (Amyotrophic Lateral Sclerosis). Date: 26-Jun-2014 Time: 14:30:00 Location: 336 The Biodegradation and Surfactants Database Jorge dos Santos Oliveira INESC-ID Lisboa and IST Abstract—The Biodegradation and Surfactants Database (BioSurfDB) is a curated relational information system currently integrating 14 metagenomes; 137 organisms; 73 biodegradation-relevant genes; 62 proteins and 6 of their metabolic pathways; 29 documented bioremediation experiments, with specific pollutant treatment efficiencies by surfactant-producing organisms; and a curated list of 46 biosurfactants, grouped by producing organism, surfactant name, class, and reference. 
Our goal is to gather published and novel information on the identification and characterization of genes involved in oil biodegradation and the bioremediation of polluted environments, and to provide it in a curated way, together with a series of computational tools to aid biology studies. Date: 12-Jun-2014 Time: 14:30:00 Location: 336 Data integration tools for pre-processing biological data - CANCELED Valéria Magalhães Pequeno Inesc-ID Abstract—The increasing use of Electronic Health Records (EHRs) enables a better analysis of patient data, improving the quality of medical care. EHRs must be processed in order to provide a variety of services to the physician, such as risk classification and summarization. EHRs are usually stored in unstructured text or Excel files containing different data formats and types, missing information, and, sometimes, inconsistent information. Therefore, before analyzing the data, we often need to transform and integrate it. In this presentation, we show some examples of data integration tools that can be used to extract and transform data. As an example, we use an Excel file containing exam information regarding patients with ALS (Amyotrophic Lateral Sclerosis). Date: 29-May-2014 Time: 14:30:00 Location: 336 A Deep Neural Network Approach to Speech Enhancement Chin-Hui Lee Georgia Institute of Technology Abstract—In contrast to conventional minimum mean square error (MMSE) based noise reduction techniques, we formulate speech enhancement as finding a mapping function between noisy and clean speech signals. In order to handle a wide range of additive noises in real-world situations, a large training set, encompassing many possible combinations of speech and noise types, is first designed. Next, a deep neural network (DNN) architecture is employed as a nonlinear regression function to ensure a powerful modeling capability. 
Several techniques have also been adopted to improve the DNN-based speech enhancement system, including global variance equalization to alleviate the over-smoothing problem of the regression model, and dropout and noise-aware training strategies to further improve the generalization capability of DNNs to unseen noise conditions. Experimental results demonstrate that the proposed framework can achieve significant improvements in both objective and subjective measures over the MMSE-based techniques. It is also interesting to observe that the proposed DNN approach can effectively suppress highly non-stationary noise, which is tough to handle in general. Furthermore, the resulting DNN model, trained with artificially synthesized data, is also effective in dealing with noisy speech data recorded in real-world scenarios, without generating the annoying musical artifacts commonly observed in conventional enhancement methods. [ Bio ] Chin-Hui Lee is a professor at the School of Electrical and Computer Engineering, Georgia Institute of Technology. Before joining academia in 2001, he had 20 years of industrial experience, ending at Bell Laboratories, Murray Hill, New Jersey, as a Distinguished Member of Technical Staff and Director of the Dialogue Systems Research Department. Dr. Lee is a Fellow of the IEEE and a Fellow of ISCA. He has published over 400 papers and 30 patents, and is highly cited for his original contributions, with an h-index of 66. He has received numerous awards, including the Bell Labs President's Gold Award in 1998. He won the IEEE Signal Processing Society's 2006 Technical Achievement Award for "Exceptional Contributions to the Field of Automatic Speech Recognition". In 2012 he was invited by ICASSP to give a plenary talk on the future of speech recognition. In the same year he was awarded the ISCA Medal for scientific achievement for “pioneering and seminal contributions to the principles and practice of automatic speech and speaker recognition”. 
Date: 26-May-2014 Time: 15:30:00 Location: QA1.2 (IST Alameda) Advanced techniques for integrated DC-DC converters Marcelino Bicho dos Santos, Pedro Alou Cervera Universidad Politécnica de Madrid Abstract—Professor Pedro Alou Cervera will present his research activity in the following areas: - PowerSoC project: a European project to fully integrate a DC/DC converter - Advanced techniques to optimize the dynamic response of DC/DC converters Pedro Alou (M'07) was born in Madrid, Spain, in 1970. He received the M.S. and Ph.D. degrees in Electrical Engineering from the Universidad Politécnica de Madrid (UPM), Spain, in 1995 and 2004, respectively. He has been a Professor at this university since 1997. He has been involved in Power Electronics since 1995, participating in more than 40 R&D projects with industry. He has authored or coauthored over 100 technical papers and holds three patents. His main research interests are in power supply systems, advanced topologies for efficient energy conversion, modeling of power converters, advanced control techniques for high dynamic response, energy management and new semiconductor technologies for power electronics. His research activity is distributed among industrial, aerospace and military projects. Date: 23-May-2014 Time: 09:00:00 Location: 336 FaRM: Fast Remote Memory Aleksandar Dragojevic Microsoft Research Abstract—I will talk about the design and implementation of FaRM, a new main memory distributed computing platform that exploits RDMA communication to improve both latency and throughput by an order of magnitude relative to state-of-the-art main memory systems that use TCP/IP. FaRM exposes the memory of machines in the cluster as a shared address space. Applications can allocate, read, write, and free objects in the address space. They can use distributed transactions to simplify dealing with complex corner cases that do not significantly impact performance. 
FaRM provides good common-case performance with lock-free reads over RDMA and with support for collocating objects and function shipping to enable the use of efficient single-machine transactions. FaRM uses RDMA both to directly access data in the shared address space and for fast messaging, and is carefully tuned for the best RDMA performance. We used FaRM to build a key-value store and a graph store similar to Facebook's. They both perform well: for example, a 20-machine cluster can perform 160 million key-value lookups per second with a latency of 31 microseconds. Date: 09-May-2014 Time: 14:00:00 Location: 336 Integrative biomarker discovery in neurodegenerative diseases: a survey André Carreiro INESC-ID Lisboa and IST Abstract—Data mining has been widely applied in biomarker discovery, resulting in significant findings of different clinical and biological biomarkers. With developments in technology, from genomics to proteomics analysis, a deluge of data has become available, as well as standardized data repositories. Nonetheless, researchers are still facing important challenges in analyzing the data, especially when considering the complexity of pathways involved in biological processes or diseases. Data from single sources seem unable to explain complex processes, such as the ones involved in brain-related disorders, thus raising the need for a more comprehensive perspective. A possible solution relies on data and model integration, where several data types are combined to provide complementary views, which in turn can result in the discovery of previously unknown biomarkers, by unravelling otherwise hidden relationships between data from different sources. In this work, we review the different single-source types of data used for biomarker discovery in neurodegenerative diseases, and then proceed to provide an overview of recent efforts to perform integrative analysis in these disorders, discussing major challenges and advantages. 
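The FaRM figures quoted in the entry above (160 million lookups per second on 20 machines, at 31 microseconds each) invite a back-of-the-envelope check via Little's law: concurrency equals throughput times latency. The per-machine breakdown below is an inference for illustration, not a number from the talk.

```python
# Back-of-the-envelope check of the FaRM key-value numbers above using
# Little's law: outstanding_requests = throughput * latency.
# The per-machine split is an inference, not a figure from the talk.

total_lookups_per_s = 160e6   # 160 million key-value lookups/second
machines = 20
latency_s = 31e-6             # 31 microseconds per lookup

per_machine_rate = total_lookups_per_s / machines   # lookups/s per machine
outstanding = per_machine_rate * latency_s          # Little's law

print(f"per-machine rate: {per_machine_rate:.0f} lookups/s")
print(f"implied outstanding requests per machine: {outstanding:.0f}")
```

The result, roughly 250 in-flight requests per machine, shows why such systems depend on deep request pipelining rather than one-at-a-time RPC.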
Date: 24-Apr-2014 Time: 14:30:00 Location: 336 Fixed-parameter tractable reductions to SAT Ronald de Haan TU Wien Abstract—Modern propositional satisfiability (SAT) solvers perform extremely well in many practical settings and can be used as an efficient back-end for solving NP-complete problems. However, many fundamental problems in Knowledge Representation and Reasoning are located at the second level of the Polynomial Hierarchy (PH) or even higher, and hence for these problems polynomial-time transformations to SAT are not possible, unless the PH collapses. Recent research shows that in certain cases one can break through these complexity barriers by fixed-parameter tractable (fpt) reductions, which exploit structural aspects of problem instances in terms of problem parameters. We develop a general theoretical framework that supports the classification of parameterized problems on whether or not they admit such an fpt-reduction to SAT. We ground this framework in its application to concrete reasoning problems from various domains. We develop several parameterized complexity classes to provide evidence that in certain cases such fpt-reductions to SAT are not possible. Moreover, we relate these new classes to existing parameterized complexity classes. Additionally, for problems for which there exists a Turing fpt-reduction to SAT, we develop techniques to provide lower bounds on the number of calls to a SAT solver needed to solve these problems. Date: 02-Apr-2014 Time: 14:00:00 Location: 336 Challenges for embedded systems development: can we have it all? Luigi Carro Universidade Federal do Rio Grande do Sul Abstract—In this talk we discuss the current design challenges for embedded systems, which face pressure from the market, from technology, and from software development. After discussing the context, we introduce some first steps towards achieving software productivity together with high reliability and low energy dissipation. 
We present RA3, the Resilient Adaptive Algebraic Architecture, which is capable of adapting parallelism exploitation in a time-deterministic fashion to reduce power consumption, while meeting the existing real-time deadlines. Furthermore, the architecture provides low-overhead error correction capabilities, through the use of algebraic properties of the operations it performs. We use two real-time industrial case studies to validate the architecture and to show how the adaptive exploitation works. Finally, we present the results of fault-injection campaigns to show the architecture's resilience against soft errors. Date: 02-Apr-2014 Time: 10:00:00 Location: IST, VA1 (Pavilhão de Civil) Automatic Detection and Correction of Web Application Vulnerabilities using Data Mining to Predict False Positives Ibéria Medeiros Faculdade de Ciências da Universidade de Lisboa Abstract—Web application security is an important problem in today's internet. A major cause of this status is that many programmers do not have adequate knowledge about secure coding, so they leave applications with vulnerabilities. An approach to solving this problem is to use static source code analysis to find these bugs, but such tools are known to report many false positives, which makes the task of correcting the application hard. This paper explores the use of a hybrid of methods to detect vulnerabilities with fewer false positives. After an initial step that uses taint analysis to flag candidate vulnerabilities, our approach uses data mining to predict the existence of false positives. This approach reaches a trade-off between two apparently opposite approaches: humans coding the knowledge about vulnerabilities (for taint analysis) versus automatically obtaining that knowledge (with machine learning, for data mining). Given this more precise form of detection, we perform automatic code correction by inserting fixes in the source code. 
The approach was implemented in the WAP tool and an experimental evaluation was performed with a large set of open-source PHP applications. The talk will be a dry run of a paper presentation to be given at the International World Wide Web Conference - WWW 2014. Date: 21-Mar-2014 Time: 16:00:00 Location: 020 Extracting academic data and linked data anonymization Pedro Rijo INESC-ID Lisboa and IST Abstract—Data is becoming more valuable each day as more diverse and rich data sources become available, allowing us to discover knowledge in unprecedented ways. IST uses the FénixEdu information system for managing most of its internal data. The system contains data about students, teachers, employees, courses, and all major aspects of IST as an organization. Such data may be useful both for external agents and, more importantly, for IST itself to study our academic environment. The data may be used as input for state-of-the-art IR and KD technologies to extract newer and deeper knowledge about academic agents, allowing us to solve problems in, and to better understand, our community. Releasing this kind of data publicly comprises an additional step concerning the privacy of the individuals referred to and, as has been shown, simple de-identification may not be enough to achieve this goal. On the other hand, we must deal with both internal and external data, on top of an evolving environment, where linked-data-based approaches can definitely help us to deal with such complexity. In this talk we will discuss a solution for exposing, sharing, and connecting data, information, and knowledge available in the IST information system, taking into consideration privacy and anonymity issues. Date: 20-Mar-2014 Time: 14:30:00 Location: 336 Aspects of Geospatial Search and Analysis Dirk Ahlers European Research Consortium for Informatics and Mathematics Abstract—Geography helps us to understand and map the world and to navigate in it. 
Geospatial data and location references help us to understand spatial relations and characteristics of diverse documents. The talk will discuss some aspects of the development of geospatial search engines, such as crawling, geoparsing/extraction, geocoding, and analysis. It will further showcase experiences in developing geospatial search in various settings and countries. An emphasis will be put on recent work in gazetteer analysis, looking into quality indicators for the GeoNames dataset, which is widely used for geoparsing and geocoding. Open questions and possible future work will hopefully start a motivating discussion. Date: 11-Mar-2014 Time: 14:30:00 Location: 336 Network mining based analysis of whole brain functional connectivity André Chambel Departamento de Engenharia Informática Abstract—Mapping the human brain has been a topic of interest for the last few decades. In spite of its incredible complexity, it is now possible to map the brain using a combination of advanced data representation and data processing algorithms, supported by the huge computational power that is available nowadays. In this work we describe an approach for mapping whole-brain functional connectivity. The starting point of our work is a set of high-resolution functional magnetic resonance images (fMRI) obtained with a 7T magnetic field that cover a wider brain volume than usual. The fMRIs are then used to build the so-called brain functional connectivity network. These networks extracted from the brain can be represented as graphs, i.e., a set of nodes (regions) and a set of edges connecting such nodes. With the networks represented as graphs, we apply network mining techniques to them, namely clustering and modularity algorithms that allow us, for instance, to identify functional modules of the brain. Presumably, the increased resolution will allow us to obtain more detailed information, with the potential to uncover additional structure. 
Due to the size of the graphs, all the algorithms must be optimized in order to minimize the resources used. Date: 06-Mar-2014 Time: 14:30:00 Location: 336 Computational prediction of microRNA targets in plant genomes Manuel Reis Departamento de Engenharia Informática Abstract—MicroRNAs (miRNAs) are important post-transcriptional regulators and act by recognizing and binding to sites in their target messenger RNAs (mRNAs). They are present in nearly all eukaryotes, in particular in plants, where they play important roles in developmental and stress response processes by targeting mRNAs for cleavage or translational repression. MiRNAs have been shown to have a crucial role in gene expression regulation, but so far only a few miRNA targets in plants have been experimentally validated. Based on the number of identified genes, on the number of experimentally validated miRNAs and on the fact that one miRNA often regulates multiple genes, a long list of yet unidentified targets is to be expected. Here, we present a novel miRNA target prediction method for plants that incorporates an evolutionary approach. With this approach, we intend to understand whether a transcript shows evidence of exhibiting a sequence bias towards either eliciting or avoiding target sites for a particular miRNA. Date: 20-Feb-2014 Time: 14:30:00 Location: 336 Topology-aware placement and load-balancing Emmanuel Jeannot INRIA Abstract—The current generation of clusters features NUMA nodes with multicore and many-core processors. Programming such architectures efficiently is a challenge because numerous hardware characteristics have to be taken into account, especially the memory hierarchy. One appealing idea for improving the performance of parallel applications is to decrease their communication costs by matching the communication pattern to the underlying hardware architecture. In this talk we detail the algorithm and techniques proposed to achieve such a result. 
First, we gather both the communication pattern information and the hardware details. Then we compute a relevant reordering of the various process ranks of the application. Finally, those new ranks are used to reduce the communication costs of the application. We also developed two load balancers for Charm++ that take into account topology and communication aspects, depending on whether the application is compute-bound or communication-bound. We show that the proposed load-balancing scheme manages to improve the execution times for the two classes of parallel applications. Date: 12-Feb-2014 Time: 14:30:00 Location: 336 Design and Implementation of a Domain Specific Language for Next Generation Sequence Analysis Paulo Monteiro Departamento de Engenharia Informática Abstract—Next Generation Sequencing (NGS) is a set of molecular biology technologies which generate, at low cost, many millions of short nucleotide reads. Typical datasets consist of tens of millions of reads, with each read comprising 35-500 base pairs (depending on the technology used, different read sizes can be obtained). There are many tools for handling these datasets. However, they must still be combined to build a full analysis pipeline. Current solutions for building these pipelines are Make-like tools which can handle text files and Unix-like commands. Several GUI-based solutions allow users who are not comfortable with the command line to build and run these pipelines. However, they still operate at the semantic level of Make: file dependencies and transformation commands. Because each problem and each variation on the technology requires a different processing pipeline, it would be impossible to design a single pipeline for every need. This paper aims at describing a context-aware tool that will support the first phase of NGS analysis. 
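The "semantic level of Make" that the NGS abstract above describes, stages plus dependencies, can be sketched as a tiny embedded pipeline DSL. The stage names below (quality filtering, alignment, variant calling) are illustrative placeholders, not part of any real NGS toolchain or of the tool from the talk.

```python
# Tiny sketch of a Make-like analysis pipeline: each stage declares the
# stages it depends on, and running a target runs its dependencies
# first, each stage at most once. Stage names are illustrative only.

class Pipeline:
    def __init__(self):
        self.stages = {}              # name -> (dependencies, function)

    def stage(self, name, deps=()):
        def register(fn):
            self.stages[name] = (tuple(deps), fn)
            return fn
        return register

    def run(self, target, done=None):
        """Run `target` after its dependencies (topological order)."""
        done = set() if done is None else done
        if target in done:
            return done
        deps, fn = self.stages[target]
        for d in deps:
            self.run(d, done)
        fn()
        done.add(target)
        return done

pipe = Pipeline()
log = []

@pipe.stage("filter_reads")
def filter_reads():
    log.append("filter_reads")        # e.g. quality-trim raw reads

@pipe.stage("align", deps=["filter_reads"])
def align():
    log.append("align")               # e.g. map reads to a reference

@pipe.stage("call_variants", deps=["align"])
def call_variants():
    log.append("call_variants")

pipe.run("call_variants")
print(log)                            # dependencies run before dependents
```

A real Make-like tool would additionally compare file timestamps to skip up-to-date stages; the point here is only the dependency-driven execution order the abstract refers to.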
Date: 06-Feb-2014 Time: 14:30:00 Location: 336 Self-Stabilizing Leader Election in Population Protocols Janna Burman University Paris-South 11 and LRI - Laboratoire de Recherche en Informatique, Orsay, France Abstract—We consider the fundamental problem of self-stabilizing leader election (SSLE) in the model of population protocols. In this model, an unknown number of asynchronous, anonymous and finite-state mobile agents interact in pairs over a given communication graph. SSLE has been shown to be impossible in the original model. This impossibility can be circumvented by a modular technique augmenting the system with an oracle - an external module abstracting the added assumption about the system. Fischer and Jiang have proposed solutions to SSLE, for complete communication graphs and rings, using an oracle Ω?, called the eventual leader detector. In this work, we present a solution for arbitrary graphs, using a composition of two copies of Ω?. We also prove that the difficulty comes from the requirement of self-stabilization, by giving a solution without an oracle for arbitrary graphs, when a uniform initialization is allowed. Date: 28-Jan-2014 Time: 14:00:00 Location: 336 The Organization of the Retina and Visual System Prof. Eduardo Fernandez Instituto de Bioingenieria, Facultad de Medicina, Universidad Miguel Hernandez Abstract—Understanding the organization of the vertebrate retina has been an important research topic in recent years. Anatomical descriptions of the cell types that constitute the retina, and the understanding of the role of those cells in combination with psychophysical studies, have contributed to understanding how the retina might be organized and how it functions. In this talk, Prof. Eduardo Fernandez will present the most recent advances in understanding the visual system, and its applications to impaired people, namely to develop BCIs and humanoid robots. 
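The population-protocol model from the leader-election abstract above, anonymous finite-state agents interacting in random pairs, can be illustrated with a toy protocol on a complete communication graph: every agent starts as a leader, and when two leaders meet one demotes itself. This shows only the interaction model; it is not the self-stabilizing, oracle-based construction of the talk, since it relies on a well-chosen initial state.

```python
import random

# Toy population protocol on a complete graph: a scheduler repeatedly
# picks a random pair of agents to interact. When two leaders meet, the
# responder becomes a follower, so the leader count only ever shrinks,
# converging to exactly one leader. NOT self-stabilizing: it depends on
# every agent starting in the leader state.

random.seed(7)

def elect_leader(n, max_interactions=100_000):
    is_leader = [True] * n                   # well-initialized start
    for _ in range(max_interactions):
        if sum(is_leader) == 1:
            break
        a, b = random.sample(range(n), 2)    # scheduler picks a pair
        if is_leader[a] and is_leader[b]:
            is_leader[b] = False             # responder demotes itself
    return sum(is_leader)

print(elect_leader(50))                      # converges to a single leader
```

Self-stabilization would require convergence from *any* configuration, including one with zero leaders, which is exactly what makes SSLE impossible without an oracle such as Ω? in this model.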
Date: 20-Dec-2013 Time: 14:30:00 Location: 220 Application of RNS to Cryptography Prof. Jean-Claude Bajard Univ. Paris VI (Pierre et Marie Curie) Abstract—Residue Number Systems (RNS) are an effective number representation, with several advantages over weighted number systems, namely for DSP and cryptography. This talk will present the latest research results on the application of RNS to public-key cryptography, namely to compute the Montgomery exponentiation. Date: 18-Dec-2013 Time: 17:00:00 Location: 220 Everything you always wanted to know about worst-case (but were afraid to ask) ... Helmut Graeb Technische Universitaet Muenchen Abstract—Process corners, corner cases, worst-case parameter sets, ...; there are a lot of myths about certain parameter sets that are supposed to capture some kind of measure of the variability of a circuit manufactured in a semiconductor technology. But what are these corners really? How are they determined? How should the results of a worst-case simulation be interpreted? And how can I get an estimate of the yield, more specifically, the parametric yield? These are questions that every designer of analog and mixed-signal circuits is confronted with in his every-day life of designing complex circuits in ever-advancing technologies with ever-increasing transistor variability. The first part of the talk will give some answers. Constraints are key elements of analog design automation: a mathematical optimization tool would not be applicable if it were not provided with constraints to keep transistors in saturation or to take care of symmetrical sizing, for instance. Interestingly, the netlist of an analog circuit can inherently provide a lot of constraints. The second part of the talk presents a method to automatically extract constraints from a given netlist. It consists of two parts. First, an analysis of the hierarchical structure of a circuit is described. Second, a signal path analysis is presented. 
The overall outcomes are constraints for sizing and placement, as well as a construction plan for analog placement. It will be illustrated how to use this outcome in the sizing and placement of analog circuits. Bio: Helmut Graeb received his Dipl.-Ing., Dr.-Ing., and habilitation degrees from Technische Universitaet Muenchen in 1986, 1993 and 2008, respectively. He was with Siemens Corporation, Munich, from 1986 to 1987, where he was involved in the design of DRAMs. Since 1987, he has been with the Institute of Electronic Design Automation, TUM, where he has been the head of a research group since 1993. His research interests are in design automation for analog and mixed-signal circuits, with particular emphasis on Pareto optimization of analog circuits considering parameter tolerances, analog design for yield and reliability, hierarchical sizing of analog circuits, analog/mixed-signal test design, discrete sizing of analog circuits, structural analysis of analog and digital circuits, and analog layout synthesis. Dr. Graeb has, for instance, served as a Member of the Executive Committee of the ICCAD conference, as a Member or Chair of the Analog Program Subcommittees of the ICCAD, DAC, and DATE conferences, as Associate Editor of the IEEE Transactions on Circuits and Systems Part II: Analog and Digital Signal Processing and the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and as a Member of the Technical Advisory Board of MunEDA GmbH Munich, which he co-founded. He is a Senior Member of the IEEE (CAS) and a member of the VDE (ITG). He was the recipient of the 2008 prize of the Information Technology Society (ITG) of the Association for Electrical, Electronic and Information Technologies (VDE), of the 2004 Best Teaching Award of the TUM EE Faculty Students Association, and of the 3rd prize in the 1996 Munich Business Plan Contest. 
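One of the questions the abstract above poses, how to estimate the parametric yield, has a textbook Monte Carlo answer: sample the process parameters from their statistical distribution, evaluate the circuit performance for each sample, and count the fraction that meets the specification. The "performance" function, the parameter statistics, and the spec limit below are all stand-ins for illustration, not a real transistor model.

```python
import numpy as np

# Minimal Monte Carlo estimate of parametric yield: sample process
# parameters from a (correlated) Gaussian, evaluate a stand-in linear
# performance measure, and report the fraction of samples in spec.

rng = np.random.default_rng(1)

def parametric_yield(n_samples=100_000, spec_limit=1.5):
    # Two correlated Gaussian process parameters (hypothetical).
    mean = np.array([0.0, 0.0])
    cov = np.array([[1.0, 0.3],
                    [0.3, 1.0]])
    p = rng.multivariate_normal(mean, cov, size=n_samples)
    # Stand-in performance measure: must stay below spec_limit.
    performance = 0.8 * p[:, 0] - 0.5 * p[:, 1]
    return float((performance < spec_limit).mean())

y = parametric_yield()
print(f"estimated parametric yield: {y:.3f}")
```

Worst-case parameter sets, by contrast, try to capture the same variability with a handful of deterministic "corner" points instead of many random samples; the talk's first part is about how those corners relate to yield estimates like this one.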
Date: 12-Dec-2013 Time: 10:30:00 Location: room EA3, Torre Norte do IST A data mining approach to study disease presentation patterns in Primary Progressive Aphasia Telma Pereira Departamento de Engenharia Informática Abstract—Nowadays the world is faced with an ageing population and the related challenges, such as healthcare issues, given the current incidence of diseases more prevalent in the elderly, such as neurodegenerative diseases. Primary Progressive Aphasia (PPA) is a neurodegenerative disease characterized by a gradual dissolution of language abilities; these patients warrant special attention since they have a higher risk of evolving to dementia. Consequently, discovering the different subtypes of PPA patients is fundamental to the timely administration of pharmaceutical and therapeutic interventions, improving patients' quality of life. This thesis proposes a data mining approach to extract relevant knowledge from clinical data, namely to learn the variants of PPA. Initially, standard clustering algorithms were applied with the purpose of studying the number of groups existing in the dataset and, eventually, studying the potential existence of new groups, different from the PPA subtypes already defined in the literature. Then, during a second phase, supervised learning techniques were used to analyze patients according to their clinical classification in one of the three PPA variants and to develop a new and accurate classification model. The unsupervised learning analysis pointed to the existence of two main groups in the dataset analyzed in this work. This study included the evaluation of diverse sets of attributes in order to assess which type/set of attributes produced better results. Finally, two new methodologies for classifying patients with PPA were developed, reaching good accuracies on the dataset under study. 
One of those methodologies enables the identification of instances which are (potentially) not from any of the three already defined PPA subtypes. Date: 05-Dec-2013 Time: 14:30:00 Location: 336 Some results with MWMR registers H. Fauconnier University Paris Diderot Abstract—What is the number of registers required to solve a task? Many years ago, Ellen et al. proved a lower bound of square root of n registers to solve consensus (obstruction-free), but today there is no known consensus algorithm using fewer than n registers. In a system of n processes, if each process has its own SWMR register, it is possible to emulate any number of registers, but what tasks can be solved with fewer than n registers? Before considering this question, what happens when we only have MWMR registers? A trivial way may be to assign each process one MWMR register: given an array C of MWMR registers, C[i] will be assigned to process i. But if the n processes have ids drawn from a very large set of N identifiers, the size of C depends on N, not on n. Renaming algorithms may help, but they use a non-linear (in n) number of MWMR registers. We give a solution without renaming that implements a SWMR register for each process using only n MWMR registers. This implementation is only non-blocking, but with 2(n-1) MWMR registers we get a wait-free implementation. Moreover, we prove that n is a lower bound for such an implementation. We also prove that n MWMR registers are sufficient to solve any wait-free task solvable with any number of (MWMR or SWMR) registers. If the number of MWMR registers is less than n, we prove that some tasks may nevertheless be solved (obstruction-free). For example, we prove that 2 registers are necessary and sufficient to solve the set-agreement problem (obstruction-free). This is joint work with C. Delporte, H. Fauconnier, E. Gafni and S. Rajsbaum (ICDCN 2013). A recent extension to the adaptive case was made jointly with L. Lamport (DISC 2013). Bio: H. 
Fauconnier received his Ph.D. in 1982 and his HDR degree in 2001, both in Computer Science from the University Paris-Diderot, after Master's degrees in Mathematics and Computer Science. He is a top-level expert in fault-tolerant distributed computing and has published papers in many journals, such as JACM, Distributed Computing, and TOPLAS, and in the top conferences of this area (PODC, DISC, DSN, ICDCS, ...). He has been a program committee member of established conferences in Distributed Computing such as PODC, DISC, IEEE ICDCS, and OPODIS. He is currently at LIAFA, University Paris Diderot. Date: 04-Dec-2013 Time: 11:00:00 Location: 336 Towards face-to-face conversations with social robots Joakim Gustafson KTH Abstract—Bio: Joakim Gustafson, Professor in speech technology at KTH, has been a prolific researcher on multimodal dialogue systems since 1993. He has an industrial background from TeliaSonera where, in addition to research, he was involved in the launching of public speech applications. Gustafson's research activities cover the design and development of multimodal conversational systems, interactional analysis of spontaneous spoken phenomena, conversational phenomena in speech synthesis, development of speech-enabled robots, and data collections of human-computer interactions in public spaces. He has participated in several EU projects such as Onomastica, NICE, MonAmi, IURO, GetHomeSafe and SpeDial. He is currently the principal investigator in two nationally funded three-year research projects: Incremental Text-To-Speech Conversion and Situated Audio Visual Interaction with Robots. He is also a member of the Editorial Board of the journal Speech Communication. 
Date: 03-Dec-2013 Time: 16:00:00 Location: 336 Evaluating differential gene expression using RNA-sequencing data: a case study in host-pathogen interaction upon Listeria monocytogenes infection Joana Cruz Departamento de Engenharia Informática Abstract—Unlike the genome, the cell transcriptome is dynamic and specific to a given cell developmental stage or physiological condition. Understanding the transcriptome is essential for interpreting the functional elements of the genome and revealing the molecular constituents of cells. Recently, developments in high-throughput DNA sequencing methodologies have provided a new method to sequence RNA at unprecedentedly high resolutions. This method is termed RNA-Seq and has been emerging as the preferred technology for both the characterization and the quantification of cell transcripts. Bearing this in mind, in this thesis I propose a bioinformatics pipeline to compare two RNA-Seq samples. This pipeline permits biological insight into the analysed samples, by extracting the main biological processes that are differentially active among the samples under analysis. Subsequent to this pipeline, I developed a novel methodology to inspect the activation of a given cellular pathway in a time-course RNA-Seq dataset. The evaluation of a Listeria monocytogenes RNA-Seq dataset with the developed tools confirmed their proper functioning. It was possible to identify global changes in the human host transcriptome and associate these changes with different stages of the Listeria monocytogenes infection lifecycle. Date: 28-Nov-2013 Time: 14:30:00 Location: 336 MetaGen-FRAME Miguel Coimbra Departamento de Engenharia Informática Abstract—Metagenomics is the study of metagenomes, unprocessed genetic material residing in the most varied sites, without separation into individual organisms. 
Metagenomic approaches to the study of biological communities are quickly changing our understanding of the function and inter-relationships of living organisms in ecosystems. The rapid advances in metagenomics are largely due to the fast development of high-throughput platforms for deoxyribonucleic acid (DNA) sequencing, which need to be accompanied by significant advances in data analysis techniques. With this work, I intended to develop and apply new data analysis techniques suited to the large amounts of data generated by metagenomics. This document presents a proposal to address the challenges posed by the storage and manipulation of such information types and the need to develop new data analysis techniques that can be applied directly to this problem. For this purpose, I intended to harness the power of parallel computing. The target result of this thesis was MetaGen-FRAME, a metagenomic framework capable of handling heterogeneous data types (from DNA sequences to genome, proteome and metabolome annotations) through the use of different data structures and computational approaches. Date: 31-Oct-2013 Time: 14:30:00 Location: 336 Unsupervised semantic structure discovery for audio Bhiksha Raj Carnegie Mellon University Abstract—Automatic deduction of semantic event sequences from multimedia requires awareness of context, which in turn requires processing sequences of audiovisual scenes. Most non-speech audio databases, however, are not labeled at a sub-file level, and obtaining (acoustic or semantic) annotations for sub-file sound segments is likely to be expensive. In our work, we introduce a novel latent hierarchical structure that attempts to leverage weakly labeled or unlabeled data and process the observed acoustics to infer semantic import at various levels. The higher layers in the hierarchical structure of our model represent increasingly higher-level semantics. 
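The layered structure discovery that the audio abstract above describes, lower layers close to the acoustics, higher layers closer to semantics, can be loosely illustrated with two levels of clustering: frames cluster into low-level "acoustic units", then segment-level histograms of those units cluster into higher-level classes. This generic two-level k-means sketch is a stand-in for, not a reproduction of, the latent hierarchical model from the talk.

```python
import numpy as np

# Two-level unsupervised structure on synthetic audio-like features:
# level 1 clusters frames; level 2 clusters per-segment histograms of
# the level-1 cluster labels. All data and layer choices are synthetic.

rng = np.random.default_rng(2)

def kmeans(x, k, iters=20):
    # Farthest-point seeding keeps the initial centers distinct.
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(x[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# Synthetic "frames" drawn from two well-separated acoustic event types.
frames = np.vstack([rng.normal(0, 0.3, (150, 4)),
                    rng.normal(3, 0.3, (150, 4))])
frame_labels = kmeans(frames, k=2)                 # lower layer: units

# Segments = consecutive runs of 10 frames, described by unit histograms.
segments = frame_labels.reshape(30, 10)
hists = np.stack([np.bincount(s, minlength=2) for s in segments]).astype(float)
seg_labels = kmeans(hists, k=2)                    # higher layer: classes
print(sorted(set(seg_labels.tolist())))
```

The point of the hierarchy in the talk is that weak or absent labels can still constrain the upper, more semantic layers; here the layers are recovered purely unsupervised from the synthetic data.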
Date: 31-Oct-2013 Time: 13:00:00 Location: 020 On Multi-class Classification Problems Using Genetic Programming Vijay Ingalalli Departamento de Engenharia Informática Abstract—Genetic Programming (GP) is a field under the umbrella of Evolutionary Computing that has been successful in addressing a variety of problems in data mining and machine learning, not excluding the problems of multi-class classification (mcc). However, its success has so far been limited to extending binary GP classifiers to mcc problems, leaving a void: there is still no GP multi-class classifier that is efficient compared to non-GP classifiers. In this work, I will present a novel algorithm that incorporates some ideas on the representation of the solution space for a tree-based GP, which will lay some foundations for filling this void and may also lead to future research in this direction. During the presentation, I shall show the success and competitiveness of this approach, and discuss future directions. Date: 24-Oct-2013 Time: 14:30:00 Location: 336 Tracking attention to issues as a way to learn about political systems: An Introduction to the Comparative Agendas Project Enrico Borghetto Universidade Nova de Lisboa Abstract—The importance of studying political agendas - the list of issues political actors devote attention to - cannot be overstated. Attention is a scarce resource in politics and, at the same time, it is a precondition for every kind of political action. The Comparative Agendas Project (CAP), a network comprising 18 universities from different member states, has developed a distinct methodological approach to studying the quantitative flow of issue attention through time and institutions. This approach allows measuring streams of influence and power within single political systems and, at the same time, systematically comparing the workings of different political systems. 
This presentation will provide an overview of the theoretical and methodological approach adopted by the CAP project, as well as of more recent spin-off projects. The analysis of Portuguese agendas is about to kick off in the coming months and there is no better moment to develop new ideas on how to contribute to and exploit the massive amount of text data available. Date: 23-Oct-2013 Time: 14:00:00 Location: 336 Quick Hyper-Volume Luis Russo Departamento de Engenharia Informática Abstract—I will present a new algorithm to calculate exact hypervolumes. Given a set of $d$-dimensional points, it computes the hypervolume of the dominated space. Determining this value is an important subroutine of Multiobjective Evolutionary Algorithms (MOEAs). We analyze the Quick Hypervolume (QHV) algorithm theoretically and experimentally. The theoretical results are a significant contribution to the current state of the art. Moreover, the experimental performance is also very competitive compared with existing exact hypervolume algorithms. Date: 10-Oct-2013    Time: 14:30:00    Location: 336 Distributed Computations Using Local Broadcasts Fabian Kuhn University of Freiburg Abstract—We discuss basic distributed computation and information dissemination tasks in networks where, as the basic communication primitive, nodes can locally broadcast a bounded-size message to all their neighbors. Such a communication assumption is natural in wireless settings and it is particularly suited to studying dynamic networks and networks with unidirectional links. For directed networks, we show that even if the network has diameter 2, as long as this fact is not known to the nodes, computing even simple functions such as the minimum of a set of values requires time essentially of order $\sqrt{n}$, where $n$ is the number of nodes of the network. We also review recent results on the complexity of such basic data aggregation tasks and of simple information dissemination tasks in dynamic networks. 
Finally, we discuss some novel results showing that in ordinary static, undirected networks, the achievable throughput when performing multiple network-wide broadcasts is tightly connected to the vertex connectivity of the network graph. Date: 07-Oct-2013    Time: 11:00:00    Location: 336 Parallel efficient alignment of reads for re-sequencing applications Miguel Coimbra Departamento de Engenharia Informática Abstract—In bioinformatics, in the context of resequencing projects, the efficient and accurate mapping of reads to a reference genome is a critical problem. One instance of this problem is the local alignment of pyrosequencing reads produced by the 454 GS FLX system against a reference sequence, an instance for which the software tool TAPyR (Tool for the Alignment of Pyrosequencing Reads) was developed. TAPyR implements a methodology to efficiently solve this problem, which proved to yield results of a quality (both in terms of content and execution speed) higher than those of mainstream applications. With the goal of further improving this platform's results, we produced a parallel implementation of the query and reference sequence access procedures of the original version. Through the use of multithreading, this new version, P-TAPyR, produces considerable reductions in the processing time of queries, scaling with the number of hardware-supported threads (not accounting for hyper-threading) available. For larger data sets, we observed running times roughly 26 times faster than serial execution with 30 executing threads, showing an experimental (progressively decreasing) serial fraction of 0.8% (determined by the Karp-Flatt metric described in a later section). Herein we present the modifications made to this software tool to allow for parallel querying of reads against an indexed reference, which scales proportionally to the number of available physical cores. 
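The experimentally determined serial fraction mentioned in the P-TAPyR abstract above is typically obtained with the Karp-Flatt metric, which estimates the serially executed fraction of a program from a measured speedup. A minimal sketch (the function name is ours; the inputs are the figures quoted in the abstract):

```python
def karp_flatt_serial_fraction(speedup: float, p: int) -> float:
    """Karp-Flatt metric: experimentally determined serial fraction
    e = (1/S - 1/p) / (1 - 1/p), where S is the measured speedup on p processors."""
    if p <= 1:
        raise ValueError("p must be > 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# With the figures reported above (speedup of roughly 26x on 30 threads):
e = karp_flatt_serial_fraction(26.0, 30)
print(f"serial fraction ~ {e:.2%}")
```

A decreasing serial fraction as data sets grow, as reported in the abstract, indicates that the parallel portion dominates for larger inputs.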
Date: 26-Sep-2013    Time: 11:00:00    Location: 336 Incremental Maintenance of RDF Views of Relational Data Vânia Vidal Universidade Federal do Ceará Abstract—Professor Vânia Vidal will present an incremental maintenance strategy, based on rules, for RDF views defined on top of relational data. The first step relies on the designer to specify a mapping between the relational schema and a target ontology and results in a specification of how to represent relational schema concepts in terms of RDF classes and properties of the designer's choice. Using the mappings obtained in the first step, the second step automatically generates the rules required for the incremental maintenance of the view. Date: 17-Sep-2013    Time: 14:00:00    Location: 336 Coupling Pattern Recognition and Signal Processing Ahmed Hussen Abdelaziz Institut für Kommunikationsakustik, Ruhr-Universität Bochum Abstract—Signal processing and pattern recognition are often treated as separate problems. However, tight coupling between them can yield significantly improved performance in both of these tasks. In this talk, we will introduce two new approaches for such a stronger coupling, providing more precise input from signal processing to pattern recognition and vice versa. We start with coupling pattern recognition models with signal processing algorithms using a new statistical model, called the twin hidden Markov model (THMM), for speech enhancement. By using the THMM, hidden Markov models (HMMs) can be exploited to enhance speech signals in a recognize-and-synthesize scheme, using the most appropriate features in both recognition and synthesis. After that, we introduce a new approach for coupling signal processing with pattern recognition, called significance decoding (SD). 
The SD approach is a new uncertainty-of-observation technique that uses the feature uncertainties estimated by the signal processing algorithm to improve the recognition accuracy of automatic speech recognition under adverse environmental conditions. Finally, we combine these two schemes in the context of audio-visual speech recognition in order to enhance its performance in very noisy environments. Date: 19-Jul-2013    Time: 15:00:00    Location: 020 Identification of Hybrid Time-varying Parameter Systems with Particle Filtering and Expectation Maximization Andras Hartmann Departamento de Engenharia Informática Abstract—One limiting assumption of many mathematical models for dynamic systems is that the parameters of the system do not change during the observation period, which, however, does not necessarily hold in many cases. This is typical for biological and medical systems, where we observe high intra-individual variability in the model parameters. A hybrid time-varying parameter framework is able to capture changes of parameters that may represent a change of state of the individual, for example in HIV-infected patients, changes of conditions in regulatory metabolic networks, or diauxic bacterial growth on mixed sugar media. Thus, in these scenarios, a subset or even all of the parameters have to be treated as time-varying in order to capture the dynamics of the system. An offline (batch) algorithm that combines particle filtering and expectation maximization is introduced for the identification of such systems. The efficiency of the proposed method is illustrated through simulated and real-world examples. Date: 19-Jul-2013    Time: 11:00:00    Location: 336 The Brazilian National Institute of Science and Technology for the Web: Towards a Better Understanding of Web Data Alberto H. F. 
Laender Universidade Federal de Minas Gerais Abstract—The National Institute of Science and Technology for the Web (InWeb) is a multi-institutional project supported by the Brazilian Ministry of Science, Technology and Innovation that aims to develop models, algorithms and novel technology to make information distribution and services through the Web more effective and safe. In this talk, we will provide an overview of InWeb's activities by describing its main research lines and some ongoing work. Date: 03-Jul-2013    Time: 11:00:00    Location: 407 Deterministic Scheduling for Replicated Systems Franz Hauck Ulm University Abstract—Deterministic scheduling is a strong requirement for most replication-based systems, as they require deterministic behaviour of replicas, and one source of indeterminism is the scheduling of multiple threads. Often scheduling is avoided altogether by disallowing multiple concurrent threads; for modern multi-core hardware this is a waste of resources. A few algorithms for deterministic user-level scheduling have been developed, e.g., LSA, PDS and MAT. Unfortunately, each has a killer application on which it performs worst. In the talk I will introduce the problems behind deterministic scheduling and sketch the potential design space of different schedulers. Our aim is to develop an adaptive scheduler that takes application behaviour into account. Finally, I will briefly introduce some of our other work items in the context of fault-tolerant computing, e.g. the Virtual Nodes framework, the Dj deterministic Java runtime, and the COSCA PaaS platform. Date: 03-Jul-2013    Time: 10:00:00    Location: 020 Spoken Dialogue Systems: Progress and Challenges Steve Young University of Cambridge Abstract—The potential advantages of statistical dialogue systems include lower development cost, increased robustness to noise and the ability to learn on-line so that performance can continue to improve over time. 
This talk will briefly review the basic principles of statistical dialogue systems, including belief tracking and policy representations. Recent developments at Cambridge in the areas of rapid adaptation and on-line learning using Gaussian processes will then be described. The talk will conclude with a discussion of some of the major issues limiting progress. Bio: Steve Young received a BA in Electrical Sciences from Cambridge University in 1973 and a PhD in Speech Processing in 1978. He held lectureships at both Manchester and Cambridge Universities before being elected to the Chair of Information Engineering at Cambridge University in 1994. He was a co-founder and Technical Director of Entropic Ltd from 1995 until 1999, when the company was taken over by Microsoft. After a short period as an Architect at Microsoft, he returned full-time to the University in January 2001, where he is now Senior Pro-Vice-Chancellor. His research interests include speech recognition, language modelling, spoken dialogue and multi-media applications. He is the inventor and original author of the HTK Toolkit for building hidden Markov model-based recognition systems (see http://htk.eng.cam.ac.uk), and with Phil Woodland he developed the HTK large-vocabulary speech recognition system, which has figured strongly in DARPA/NIST evaluations since it was first introduced in the early nineties. More recently he has developed statistical dialogue systems and pioneered the use of Partially Observable Markov Decision Processes for modelling them. He also has active research in voice transformation, emotion generation and HMM synthesis. He has written and edited books on software engineering and speech processing, and he has published, as author and co-author, more than 250 papers in these areas. He is a Fellow of the Royal Academy of Engineering, the IEEE, the IET and the Royal Society of Arts. 
He served as the senior editor of Computer Speech and Language from 1993 to 2004 and was Chair of the IEEE Speech and Language Processing Technical Committee from 2009 to 2011. In 2004, he received an IEEE Signal Processing Society Technical Achievement Award. He was elected ISCA Fellow in 2008 and was awarded the ISCA Medal for Scientific Achievement in 2010. He is the recipient of the 2013 EURASIP Individual Technical Achievement Award. Date: 24-Jun-2013    Time: 14:30:00    Location: Anfiteatro do Pavilhão Interdisciplinar, IST Alameda Unravelling communities of ALS patients using network mining André Carreiro Departamento de Engenharia Informática Abstract—Amyotrophic Lateral Sclerosis (ALS) is a devastating neurodegenerative disease characterized by a usually fast progression of muscular denervation, generally leading to death within a few years from onset. In this context, any significant improvement of the patients' life expectancy and quality of life is of major relevance. Several studies have addressed problems such as ALS diagnosis and, more recently, prognosis. However, these analyses have mostly been restricted to classical statistical approaches used to find the features most associated with a given outcome of interest. In this work we explore an innovative approach to the analysis of clinical data characterized by multivariate time series. We use a distance measure between patients, as a reflection of their relationship, to build a patient network, which in turn can be studied from a modularity point of view, in order to search for communities, or groups of similar patients. The preliminary results show that it is possible to extract relevant information from such groups, each presenting a particular behavior for some of the features (patient characteristics) under analysis. 
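The network-mining step described in the ALS abstract above, in which patients with similar multivariate time series are linked and groups of similar patients are then sought, can be sketched as follows. This is a simplified illustration, not the authors' method: all names and the distance threshold are invented, and communities are approximated by the connected components of the thresholded graph rather than by modularity optimization.

```python
def patient_communities(distances, threshold):
    """Link two patients when their time-series distance is below `threshold`,
    then return the connected components of the resulting patient graph."""
    patients = set()
    adj = {}
    for (a, b), d in distances.items():
        patients.update((a, b))
        if d < threshold:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, groups = set(), []
    for p in sorted(patients):  # sorted for deterministic traversal order
        if p in seen:
            continue
        stack, comp = [p], set()
        while stack:  # depth-first search over the thresholded graph
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack.extend(adj.get(q, ()))
        seen |= comp
        groups.append(comp)
    return groups

# Toy example: three patients, two of them close to each other.
d = {("p1", "p2"): 0.1, ("p1", "p3"): 0.9, ("p2", "p3"): 0.8}
print(patient_communities(d, threshold=0.5))  # two groups: {p1, p2} and {p3}
```

In the talk's setting, the distances would come from a similarity measure over multivariate clinical time series, and a modularity-based community detector would replace the connected-components step.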
Date: 21-Jun-2013    Time: 11:00:00    Location: 336 Host-pathogen interaction upon infection with Listeria using NGS techniques Joana Cruz Departamento de Engenharia Informática Abstract—Listeria monocytogenes is a model bacterial pathogen which, after internalization, is capable of disrupting a double-membrane vacuole, replicating in the host cytosol and manipulating the innate response triggered in the cytosol. Its intracellular lifecycle in the human host provides insight into the dynamics of general host-pathogen interactions. The identification of host sequences affected during these interactions is paramount to our understanding of how pathogens engineer their cellular environments. The main goal of this project is, therefore, to understand in which way pathogens influence human host cells, by identifying global changes in the host transcriptome and characterizing the alterations in host nuclear architecture. Furthermore, we aim to associate these changes with different stages of the Listeria monocytogenes infection lifecycle. To that end, total RNA was extracted from three different cell populations at four time points (after 20, 60, 120 and 240 minutes), so that specific stages of the bacterium's lifecycle are represented. Date: 07-Jun-2013    Time: 11:00:00    Location: 336 Novel semantic approaches in Genetic Programming Stefano Ruberto Departamento de Engenharia Informática Abstract—Evolutionary algorithms are stochastic optimization techniques based on the principles of natural evolution, and Genetic Programming (GP) belongs to this family. In recent years the study of GP systems has been extended to phenotypic aspects, whereas previously it mainly focused on genotypic and syntactic aspects. 
Phenotype, or semantics, is used with the aim of optimizing the capacity of GP algorithms to explore the solution space effectively, classifying similar individuals and exploring new semantic areas, increasing the probability of finding an optimal solution and of escaping local optima. Currently, semantic GP is strictly related to the evaluation of individuals' behavior in the candidate population: this kind of evaluation is mainly obtained through the fitness function itself. This work introduces a new way of measuring semantic similarity between individuals that is more independent from the fitness itself, allowing a fair comparison even when the fitness values involved are very far apart. This new measure enables a new series of techniques for tackling open problems in GP, such as bloat and over-fitting, and also targets the preservation of phenotypic variety, thereby enhancing performance. Preliminary results will be provided. A new theoretical GP algorithm based on this new semantic measure is also introduced, showing its potential advantages. Very early results from a first naive implementation show interesting insights into this potential when compared with other cutting-edge algorithms. Date: 24-May-2013    Time: 11:00:00    Location: 336 Equilibria in a Repeated Epidemic Dissemination Game Xavier Vilaça Departamento de Engenharia Informática Abstract—"Epidemic dissemination protocols are known to be extremely scalable and robust. As a result, they are particularly well suited to support the dissemination of information in large-scale peer-to-peer systems. In such an environment, nodes do not belong to the same administrative domain. On the contrary, many of these systems rely on resources made available by rational nodes that are not necessarily obedient to the protocol. There are two main incentive mechanisms that can be used to deal with rational behavior. 
One is to rely on balanced exchanges, which is feasible to implement in epidemic protocols where interactions are symmetric. For the asymmetric case, incentives based on a monitoring approach are better suited. Unfortunately, the literature does not provide any meaningful theoretical results for this last type of incentive. In this talk, I will present basic results that establish a tradeoff between the amount of information provided by a monitor and the ability to sustain cooperation among rational nodes, assuming perfect monitoring." Xavier Vilaça is a PhD student at IST and a researcher in the Distributed Systems Group at INESC-ID. He received an MSc degree in Computer Science and Engineering from IST in 2011 and a BSc, also in Computer Science and Engineering, from the University of Minho in 2009. This work is being presented as a final report for the Complex Network Analysis course of the PhD program in Computer Science and Engineering at IST. Date: 10-May-2013    Time: 11:00:00    Location: 336 Technical Deep-Dive in a Column-Oriented In-Memory Database Martin Faust Hasso-Plattner-Institute Abstract—Column-oriented databases are a current trend in industry (SAP HANA, Vertica) and academia (C-Store, MonetDB, HYRISE) alike. With the recent advances in hardware and the availability of machines with terabytes of RAM, the idea of a main-memory database becomes viable for large installations. The speed of main memory allows us to rethink the classic separation between transactional and analytical systems and thereby provide a single-source-of-truth database system that lets users run analytical queries on the up-to-date transactional data instead of a stale OLAP version. The talk will focus on our findings on the database utilization of large ERP customers and the conclusions that led to the design of an in-memory column store. 
We will cover the memory hierarchy and its implications for database design, basic data structures, and usage examples that benefit from the ability to run analytical-style queries on the transactional data. The talk is a condensed version of our online lecture "In-Memory Data Management", which attracted over 10,000 students when held at openHPI.de in September 2012. Date: 29-Apr-2013    Time: 16:00:00    Location: 336 Novel semantic approaches in Genetic Programming Stefano Ruberto Departamento de Engenharia Informática Abstract—Evolutionary algorithms are stochastic optimization techniques based on the principles of natural evolution, and Genetic Programming (GP) belongs to this family. In recent years the study of GP systems has been extended to phenotypic aspects, whereas previously it mainly focused on genotypic and syntactic aspects. Phenotype, or semantics, is used with the aim of optimizing the capacity of GP algorithms to explore the solution space effectively, classifying similar individuals and exploring new semantic areas, increasing the probability of finding an optimal solution and of escaping local optima. Currently, semantic GP is strictly related to the evaluation of individuals' behavior in the candidate population: this kind of evaluation is mainly obtained through the fitness function itself. This work introduces a new way of measuring semantic similarity between individuals that is more independent from the fitness itself, allowing a fair comparison even when the fitness values involved are very far apart. This new measure enables a new series of techniques for tackling open problems in GP, such as bloat and over-fitting, and also targets the preservation of phenotypic variety, thereby enhancing performance. Preliminary results will be provided. A new theoretical GP algorithm based on this new semantic measure is also introduced, showing its potential advantages. 
Very early results from a first naive implementation show interesting insights into this potential when compared with other cutting-edge algorithms. Date: 26-Apr-2013    Time: 11:00:00    Location: 336 Named-entity recognition in the past Gerrit Bloothooft Universitaet Utrecht Abstract—This talk will be about "Named-entity recognition in the past": the limited use of grapheme-to-phoneme conversion in this process, and possibilities to automatically learn variation in the spelling of names from rich historical data sources, such as full population vital registers. Bio: Gerrit Bloothooft has worked in the area of Phonetics and Speech Technology since 1978. He contributed to European educational networks in Language and Speech Technology and was an ISCA board member from 1997 to 2005. Over the years, his research interests moved from (singing) voice research to the application of speech technology and computational linguistics to data matching in the past. Date: 17-Apr-2013    Time: 10:00:00    Location: PA2, IST Alameda Identification of microRNAs and analysis of their expression in Eucalyptus globulus Jorge Oliveira Departamento de Engenharia Informática Abstract—Portugal is one of the largest producers of pulp derived from Eucalyptus globulus, making it a fundamental species for the country. The selection of adequate genotypes would make the exploitation of cultivation areas more efficient. A key objective is to understand the regulatory mechanisms impacting wood characteristics. Here we focus on microRNA-mediated regulation. MicroRNAs are endogenous molecules that act by silencing targeted messenger RNAs. Although approximately 21,000 microRNAs have been identified across many species, none is documented for the Eucalyptus genus. Here, we propose a pipeline that makes use of Cravela, a single-genome miRNA finding tool, and a new NGS data analysis algorithm that provides a novel scoring function to evaluate the expression profile of candidates. 
This approach produced a short list of candidates, including both conserved and non-conserved sequences. Experimental validation showed amplification in 4 out of 5 candidates chosen from the best-scoring non-conserved sequences. Date: 12-Apr-2013    Time: 11:00:00    Location: 336 SSL/TLS session-aware user authentication against man-in-the-middle attacks Rolf Oppliger eSECURITY Technologies Rolf Oppliger Abstract—In spite of the fact that SSL/TLS is omnipresent in today's Internet commerce, it is highly vulnerable to man-in-the-middle (MITM) attacks. In this talk, we explain why this is the case and what possibilities one has at hand to protect SSL/TLS-secured Internet commerce against MITM attacks. In particular, we introduce, discuss, and put into perspective a technology called SSL/TLS session-aware (TLS-SA) user authentication that basically links a user authentication to a particular SSL/TLS session in order to reveal the existence of an MITM. The technology does not protect against malware taking control after user authentication (a so-called man-in-the-browser attack), so TLS-SA does not stop the general trend towards transaction authentication in addition to user authentication for applications with high security requirements, such as Internet banking. Date: 10-Apr-2013    Time: 11:00:00    Location: 020 Towards OpenLogos Hybrid Machine Translation Anabela Barreiro INESC-ID Abstract—In this presentation, I will describe the OpenLogos machine translation system, its architecture and its semantico-syntactic representation language (SAL), which is the heart of the system. I will show how OpenLogos has addressed classic problems of rule-based machine translation, such as those related to the ambiguity and complexity of natural language. I will exemplify the kind of quality translation that OpenLogos is capable of and show how OpenLogos is an ideal platform for a hybrid machine translation solution. 
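The idea behind the TLS-SA technology presented in the Rolf Oppliger abstract above, binding the user's one-time credential to the specific TLS session it travels over, can be illustrated with a toy computation. This is a hedged sketch of the general principle only, not the published TLS-SA protocol; the function and the session identifiers are invented for illustration.

```python
import hashlib
import hmac

def session_aware_token(user_secret: bytes, challenge: bytes,
                        tls_session_id: bytes) -> bytes:
    """Derive the user credential from both the secret and a value identifying
    the current TLS session. A man in the middle relays the challenge over a
    *different* TLS session toward the server, so its token fails verification."""
    return hmac.new(user_secret, challenge + tls_session_id, hashlib.sha256).digest()

secret, challenge = b"user-secret", b"server-challenge"
direct = session_aware_token(secret, challenge, b"session-with-real-server")
mitm = session_aware_token(secret, challenge, b"session-with-attacker")
print(direct != mitm)  # True: the relayed token does not verify at the server
```

The server computes the same HMAC over its own view of the session; only when client and server share one end-to-end TLS session do the two values match.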
Date: 05-Apr-2013    Time: 15:00:00    Location: 020 INESC-ID Distinguished Lecture Series: Model Checking and the Curse of Dimensionality Prof. Edmund M. Clarke Carnegie Mellon University Abstract—Model Checking is an automatic verification technique for large state-transition systems. It was originally developed for reasoning about finite-state concurrent systems. The technique has been used successfully to debug complex computer hardware and communication protocols. Now, it is beginning to be used for software verification as well. The major disadvantage of the technique is a phenomenon called the State Explosion Problem. This problem is impossible to avoid in the worst case. However, by using sophisticated data structures and clever search algorithms, it is now possible to verify state-transition systems with astronomical numbers of states. Date: 13-Mar-2013    Time: 11:00:00    Location: IST Alameda, sala EA1, Lisboa NLP-triggered, ontology-based KB enrichment strategies Nuno Silva Instituto Superior de Engenharia do Porto (ISEP) Abstract—Publicly available text-based documents (e.g. news, meeting transcripts) are a very important source of knowledge, especially for organizations. These documents refer to domain entities such as persons, places, professional positions, decisions and actions. Querying these documents (instead of browsing, searching and finding) is a very relevant task for any person in general, and particularly for professionals dealing with knowledge-intensive tasks. Querying text-based documents' data, however, is not supported by common technology. For that, such documents' content has to be explicitly and formally captured into KB facts. Making use of automatic NLP processes is a common approach, but their relatively low precision and recall give rise to data quality problems. 
Further, facts existing in the documents are often insufficient to answer complex queries, hence the need to enrich the captured facts with facts from third-party repositories (e.g. public Linked Open Data (LOD) repositories). While this description suggests an integration problem, addressing this issue involves more than that, namely duplicate detection, object mapping, consistency checking, consistency resolution, and semantic and controlled data enrichment. This talk will describe a process for enriching the knowledge base from LOD repositories. This process is triggered by the NLP parsing process and guided by the constraints of the knowledge base's underlying, semantically rich ontology. The constraints defined by the ontology are interpreted and used as configuration data for the enrichment strategies. The strategies are responsible for actually enriching the knowledge base (i.e. adding new instances and new properties for the instances) according to the interpretation of the constraints. Date: 01-Mar-2013    Time: 15:00:00    Location: 020 Re-Thinking Web Accessibility Vicki Hanson University of Dundee Abstract—Previous studies of Web accessibility have found little evidence for the impact of the Web Content Accessibility Guidelines, at least over the relatively short time periods examined. This talk presents new data from over 100 top-traffic and government websites over the 14 years since the publication of WCAG 1.0. Automated analyses of WCAG Success Criteria again found high percentages of violations overall. Unlike earlier studies, however, improvements on a number of accessibility indicators were found, with government sites being less likely than top-traffic non-government sites to have accessibility violations. 
Examination of the causes of success and failure suggests that improvements may be due, in part, to changes in website technologies and coding practices rather than a focus on accessibility per se. Possible contributors to improving accessibility include the use of new browser capabilities to create more sophisticated page layouts, a growing concern with improved page rank in search results, and a shift toward cross-device content design. Understanding these examples may inspire the creation of additional technologies with incidental accessibility benefits. The talk concludes with a look at how adapting even non-compliant Web content can improve accessibility for a broad range of people.

Short bio: Vicki Hanson is Professor of Inclusive Technologies at the University of Dundee and a Research Staff Member Emeritus of IBM Research. She has been working on issues of inclusion for older and disabled people throughout her career, first as a Postdoctoral Fellow at the Salk Institute for Biological Studies. She joined the IBM Research Division in 1986, where she founded and managed the Accessibility Research group. Her research examines the changing nature of technologies and the motivations and barriers to their use by populations in danger of digital exclusion, focusing on issues related to aging, cognition, and language. Applications she has created have received multiple awards from organizations representing older and disabled users. She is Past Chair of the ACM SIG Governing Board, Past Chair of the ACM Special Interest Group on Accessible Computing (SIGACCESS), and the founder and co-Editor-in-Chief of ACM Transactions on Accessible Computing. Prof. Hanson is a Fellow of the British Computer Society and was named ACM Fellow in 2004 for contributions to computing technologies for people with disabilities. In 2008, she received the ACM SIGCHI Social Impact Award for the application of HCI research to pressing social needs. She currently is the ACM Secretary/Treasurer. She recently received the 2013 Anita Borg "Woman of Vision Award for Social Impact" and was just elected Fellow of the Royal Society of Edinburgh. Date: 25-Feb-2013    Time: 18:30:00    Location: 336 Organizational Learning and Support Tools André Luis Andrade Menolli Universidade Estadual do Norte do Paraná Abstract—Organizational learning is an area that helps companies to improve their processes significantly through the reuse of experiences. For a knowledge-intensive area such as software engineering, it is extremely important that the acquired knowledge be stored and reused systematically. 
However, making learning possible in software development companies is not an easy task, since it is an area in which processes and knowledge are usually internalized in the minds of employees. Hence, it is necessary to create environments that promote and motivate information sharing and knowledge dissemination. Therefore, this work proposes a semantic collaborative environment based on Web 2.0 tools, learning objects and units of learning, in order to help improve organizational learning in software development teams. Date: 22-Feb-2013    Time: 17:30:00    Location: 020 Perceptual and Automatic Processing of French Accents Philippe Boula de Mareüil LIMSI-CNRS Abstract—The present work, which focuses on regional, foreign and social accents in French, combines perceptual and acoustic approaches to account for variation due to speakers' geographic and (socio)linguistic backgrounds. It is based on large amounts of data, using measurement tools derived from automatic speech processing techniques to quantify certain trends. This work first aims at modelling the identification and characterisation processes of regional and foreign accents in French. Perceptual experiments and acoustic analyses were carried out using automatic phoneme alignment, which could include pronunciation variants corresponding to Southern, Belgian, West-African, Maghrebian, English, German and Portuguese accents, among others. In total, over 100 hours of regional- or foreign-accented French were analysed. Some of the most discriminating pronunciation features, such as the realisation of nasal vowels in Southern French or the realisation of schwas (backed and closed) in Portuguese-accented French, were ranked using automatic learning techniques. Since speech conveys both phonemic and prosodic information, the contribution of prosody to the perception of various accents was examined. The methodology included prosody modification/resynthesis techniques. 
The contribution of prosody was highlighted especially for the so-called banlieue accent, with a sharp pitch fall before a prosodic boundary. Modelling the production and perception of variation in speech is of major importance for understanding how language may evolve. Orientations for future work are proposed, especially to better take social factors into account and to link accents, speaking styles and expressive speech. Date: 13-Feb-2013    Time: 16:00:00    Location: 336 INESC-ID DISTINGUISHED LECTURE SERIES: Symbiotic Autonomy: Robots, Humans, and the Web Prof. Manuela Veloso Carnegie Mellon University, USA and INESC-ID Lisboa, IST Abstract—INESC-ID DISTINGUISHED LECTURE SERIES

BIO:

Title: CAMP - Computational Analysis of MicroRNAs in Plants, PTDC/EIA-EIA/122534/2010
Speaker: Paulo Fonseca

Title: NetDyn: Understanding real large networks, from structure to dynamics, PTDC/EIA-CCO/118533/2010
Speaker: Alexandre Francisco Date: 23-Sep-2011    Time: 14:30:00    Location: 336 On the Implementation of a Secure Musical Database Matching José Portêlo INESC-ID Lisboa and IST Abstract—This paper presents an implementation of a privacy-preserving music database matching algorithm, showing how privacy is achieved at the cost of computational complexity and execution time. The paper presents not only implementation details but also an analysis of the obtained results in terms of communication between the two parties, computational complexity, execution time and correctness of the matching algorithm. Although the paper focuses on a music matching application, the principles can be easily adapted to perform other tasks, such as speaker verification and keyword spotting. Date: 09-Sep-2011    Time: 15:00:00    Location: 336 Towards Environmentally Robust Speech Applications Ramon Fernandez Astudillo Inesc-ID Abstract—The talk will review the work performed in these first ten months and introduce new tools and testing benchmarks. It will also focus on future objectives and will try to motivate potential joint work. This first year was focused on speech processing and dynamic adaptation of acoustic models to the environment. Future work will shift towards the use of context and language models for robustness, as well as language processing robustness in the presence of partially missing or unreliable ASR transcriptions. For this reason I find the talk a good opportunity to get feedback on how to steer future research lines and start joint research. That also means that I will do my best to make the topic as understandable as possible (no stampede of formulas). Date: 29-Jul-2011    Time: 14:30:00    Location: 336 The Delft Reconfigurable VLIW Processor Stephan Wong Technical University of Delft Abstract—In this presentation, we present the rationale and design of the Delft reconfigurable and parameterized VLIW processor, called rho VEX (rVEX for short). 
Its architecture is based on the Lx/ST200 ISA developed by HP and STMicroelectronics. We implemented the processor on an FPGA as an open-source softcore and made it freely available. Using the rVEX, we intend to bridge the gap between general-purpose and application-specific processing through parametrization of many architectural and organizational features of the processor. The initial set of parameters includes: the instruction set (number and type of supported instructions), the number and type of functional units (FUs), issue-width (number of slots), register file size, and memory bandwidth. The parameters can be set in a static or dynamic manner in order to provide the best performance or the best utilization of available resources on the FPGA. A complete toolchain including a C compiler and a simulator is freely available. Any application written in C can be mapped to the rVEX processor. This VLIW processor is able to exploit the instruction-level parallelism (ILP) inherent in an application and make its execution faster compared to a RISC processor system. Recent developments will be presented. The rVEX is currently being further developed within an EU-funded project called ERA: Embedded Reconfigurable Architectures. Date: 26-Jul-2011    Time: 16:30:00    Location: 336 A Polymorphic Finite Field Multiplier Saptarsi Das Indian Institute of Science Abstract—In this work we present the architecture of a polymorphic multiplier for operations over various extensions of GF(2). We evolved the architecture of a textbook shift-and-add multiplier to arrive at the architecture of the polymorphic multiplier through a generalized mathematical formulation. The polymorphic multiplier is capable of morphing itself at runtime to create data-paths for multiplications of various orders. In order to optimally exploit the resources, we also introduced the capability of sub-word parallel execution in the polymorphic multiplier. 
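For readers unfamiliar with the baseline the talk evolves from, the textbook shift-and-add multiplication over GF(2^m) can be sketched as follows. This is only an illustrative software sketch, not the polymorphic hardware design; the GF(2^8) field polynomial used in the example is the AES one, chosen here purely for demonstration:

```python
def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
    """Textbook shift-and-add multiplication in GF(2^m).

    Partial products are accumulated with XOR (carry-less addition),
    then the degree-(2m-2) result is reduced modulo the irreducible
    polynomial `poly` (given with its x^m bit set).
    """
    acc = 0
    for i in range(m):
        if (b >> i) & 1:            # add a * x^i when bit i of b is set
            acc ^= a << i
    for i in range(2 * m - 2, m - 1, -1):
        if (acc >> i) & 1:          # clear bit i via poly aligned at i
            acc ^= poly << (i - m)
    return acc

# GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
product = gf2m_mul(0x53, 0xCA, 0x11B, 8)   # 0x53 and 0xCA are inverses
```

A hardware version of the same scheme replaces the first loop with an array of AND/XOR partial-product rows, which is the structure the talk's design generalizes across field sizes.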
The synthesis results of an instance of such a polymorphic multiplier show about 41% savings in area with 21% degradation in maximum operating frequency compared to a collection of dedicated multipliers with equivalent functionality. We introduced the multiplier as an accelerator unit for field operations in the coarse-grained runtime reconfigurable platform called REDEFINE. We observed a 2.4x improvement in the performance of the AES algorithm with the multiplier used, compared with a software realization of the multiplication kernels. Date: 22-Jul-2011    Time: 15:00:00    Location: 020 REDEFINE: Application Synthesis on Reconfigurable Silicon Cores Prof. S. Nandy Indian Institute of Science Abstract—Emerging embedded applications are based on evolving standards (e.g. MPEG2/4, H.264/265, IEEE802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a Polymorphic ASIC in which specialized hardware units are composed from basic functional units at runtime. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this talk, I will provide a broad overview of the architecture of REDEFINE and its hardware-aware application synthesis framework. REDEFINE comprises an array of Tiles interconnected in a Honeycomb network. Each Tile comprises a Compute Element and a Router. In the synthesis process, applications described in a High Level Language (e.g. C) are compiled into application sub-structures. Each application substructure is hosted onto a set of Compute Elements on REDEFINE to form a Computational Structure that is a functional equivalent of a hardwired unit. 
In the application synthesis methodology for REDEFINE, application-specific Computational Structures are composed and destroyed in both space and time for the different application substructures, to support polymorphism in hardware. The characterization, diversity and multiplicity of the functional units in a Compute Element are domain specific. Thus, while the architecture of REDEFINE is application agnostic, the Compute Elements in REDEFINE can be chosen to be domain specific to enable synthesis of hardware accelerators on Reconfigurable Silicon Cores. Date: 21-Jul-2011    Time: 14:00:00    Location: Sala de reuniões do DEEC Pentaho Data Integration and the AgileBI Movement André Simões Xpand IT Abstract—The goal of this presentation is to explain the functionality of the ETL ("Extract-Transform-Load") tool provided by the open-source Pentaho platform (a rather interactive session of questions and curiosities about it is expected). The functionality that allows mapping the practice of ETL development onto agile methodologies will also be demonstrated. Date: 12-Jul-2011    Time: 10:00:00    Location: 336 Towards a mathematical model of risk assessment of biocide induced antibiotic resistance Joana Coelho INESC-ID Lisboa and IST Abstract—Biocides have been widely used for several decades to preserve materials including food and cosmetics, to decontaminate surfaces, to disinfect instruments, in fabrics and even in toys, for personal hygiene, and to prevent transmission of infections. Nevertheless, when used in large volumes or at high concentrations, biocides have toxic effects, and excessive use is dangerous for the environment, including animals and humans. Despite this widespread and ever increasing use of biocides, most bacterial and fungal species remain susceptible, but decreased susceptibility has been reported and occasionally linked to antibiotic resistance, mainly in human and veterinary pathogens. 
The problem of the development of resistances, together with the possibility of preventing them, has been carefully considered by the EC in the Biocides Directive 98/8/CE, a norm which provides for a high level of protection for the environment and humans, and harmonizes the rules for placing active substances and biocidal products on the market within the European Union. This work is developed in the context of the European project BIOHYPO (Proposal No 227258 of the FP7 Cooperation Work Programme: Food, Agriculture and Fisheries, and Biotechnologies) (Dr. Marco Oggioni, PI). The main goal is the evaluation of the risk of a clinically significant increase or spread of antibiotic resistance in food pathogens due to biocide use. Statistical analyses are performed on a large data set of Staphylococcus aureus in order to gain insight into the real clinical relevance of any antibiotic/biocide co- and cross-resistance. Date: 08-Jul-2011    Time: 15:00:00    Location: Anf. Qa1.2 – Torre Sul (IST) Systems-of-Systems Engineering - The Engineering of Multiple Integrated Complex Systems Rui Santos Cruz Instituto Superior Técnico Abstract—This talk provides an overview of the concepts of "Systems Engineering" and "Systems-of-Systems Engineering", based on the following bibliography review: - A. Gorod, et al., “System-of-Systems Engineering Management: A Review of Modern History and a Path Forward,” IEEE Systems Journal, vol. 2, pp. 484–499, Dec. 2008. /// - S. B. Johnson, “Three Approaches to Big Technology: Operations Research, Systems Engineering, and Project Management,” Technology and Culture, vol. 38, pp. 891–919, Oct. 1997. /// - A. Sousa-Poza, et al., “System of systems engineering: an emerging multidiscipline,” International Journal of System of Systems Engineering, vol. 1, pp. 1–17, Jan. 2008. /// - R. Valerdi, et al., “A research agenda for systems of systems architecting,” International Journal of System of Systems Engineering, vol. 1, pp. 171–188, Jan. 2008. 
Date: 08-Jul-2011    Time: 12:00:00    Location: 020 Conceptual Modelling in Information Systems Hugo Miguel Álvaro Manguinhas Instituto Superior Técnico Abstract—The set of concepts used to describe a particular domain of interest constitutes a conceptualization of that domain (i.e. a conceptual model). In the past decades, several modeling languages have been designed to ultimately express all the constraints of a conceptual model. This paper presents an overview of some of these languages, evaluating their ability to express conceptual models. Date: 08-Jul-2011    Time: 11:30:00    Location: 020 Research challenges from Free Software Distributions Prof. Roberto di Cosmo University Paris VII Abstract—Free Software distributions, like Debian, RedHat, or Ubuntu, are some of the largest component-based software systems, and they all use packages as their building blocks, together with tools for selecting, installing and removing packages on a running system. Evolving such complex software systems is a daunting task that carries significant challenges: in this talk, after providing a simple formalisation of packages and distributions, we will survey some recent results and algorithms developed to answer questions like "which is the most important package among the 27000 ones in Debian squeeze?", or "what version change is most likely to have an impact on the system?" Date: 06-Jul-2011    Time: 09:00:00    Location: sala I do Pav. de Informática II do IST Local identifiability of a HIV-1 infection model using a sensitivity approach João Gonçalo Silva Marques INESC-ID Lisboa and IST Abstract—The dynamic modeling of the Human Immunodeficiency Virus 1 (HIV-1) infection is still one of the great challenges in systems biology. The high prevalence of Acquired Immune Deficiency Syndrome (AIDS), known to be caused by HIV, and the fact that no cure has yet been discovered, confer relevance on this area of study. 
In this paper, a dynamic model of the HIV-1 infection is analyzed. The sensitivity and identifiability issues are addressed with the purpose of optimizing the time points at which patients' blood samples should be drawn. This paper shows that there are time periods far more informative than others, thus improving parameter identifiability and estimability in the reverse engineering step. Date: 01-Jul-2011    Time: 14:00:00    Location: 04 Onto.PT: a lexical ontology for Portuguese, built automatically Hugo Gonçalo Oliveira Universidade de Coimbra Abstract—Given the landscape of lexical resources for Portuguese and the difficulties inherent in manually building a broad lexical-semantic resource for a language, the Onto.PT project aims to exploit free textual resources, such as dictionaries, thesauri or encyclopedias, in the automatic construction of a lexical ontology for our language. This presentation describes the steps needed to obtain, from text, a lexical ontology structured similarly to a wordnet, i.e., one in which concepts are represented by groups of synonymous words (synsets) which are, in turn, linked to other concepts through semantic relations. The presentation will be accompanied by examples of results obtained, included in the first version of this resource. Date: 22-Jun-2011    Time: 14:30:00    Location: 336 A mixture-of-experts approach to biclustering José Caldas Helsinki Institute for Information Technology Abstract—Biclustering is the unsupervised learning task of mining a data matrix for submatrices, known as biclusters, with desirable properties. For instance, the goal can be to find groups of genes that are co-expressed under particular biological conditions. Many biclustering methods do not allow biclusters to overlap; others do, but need to specify how the biclusters interact at the overlapping regions. 
It is therefore of interest to devise methods that allow flexible, overlapping bicluster structures while not forcing the practitioner to specify bicluster interaction models. We propose a mixture modelling framework allowing biclusters to overlap but not requiring the practitioner to postulate any parameter interaction models between biclusters. Sharing a similar intuition to mixture-of-experts models, our model allows biclusters to specify partly overlapping regions of expertise in which the biclusters are able to model the data adequately. The uncertainty over assignments of data points to biclusters depends on the membership of data points to these regions of expertise. We perform inference and parameter estimation via a variational expectation-maximization framework. The model is easily adaptable to different data types and compares favorably to other approaches, both in a binary DNA copy number variation data set and in a miRNA expression data set. Date: 17-Jun-2011    Time: 14:00:00    Location: 020 Parallel Video Coding on Multi-Core Platforms Svetislav Momcilovic Inesc-ID Abstract—This talk addresses scalable parallelization methods for real-time video coding, considering both conventional H.264/AVC and Distributed Video Coding (DVC), on multi-core platforms, such as the most recent general-purpose multi-cores, Graphical Processing Units (GPUs) and the Cell Broadband Engine (Cell/BE). Date: 17-Jun-2011    Time: 10:00:00    Location: 336 The Statistical Phrase/Accent Model for Intonation Modeling Gopala Krishna Anumanchipalli Carnegie Mellon University, USA and INESC-ID Lisboa, IST Abstract—In this talk I will describe the newly developed statistical phrase accent model for generation of natural and expressive intonation contours in speech synthesis. I will briefly mention the conventional approaches and drawbacks of existing intonation models for speech synthesis. 
I will introduce the proposed statistical model, an associated training algorithm and performance on some tasks. This is joint work with Dr. Alan Black and Dr. Luis Oliveira. Date: 08-Jun-2011    Time: 14:30:00    Location: 336 A mixture-of-experts approach to biclustering José Caldas Helsinki Institute for Information Technology Abstract—Biclustering is the unsupervised learning task of mining a data matrix for submatrices, known as biclusters, with desirable properties. For instance, the goal can be to find groups of genes that are co-expressed under particular biological conditions. Many biclustering methods do not allow biclusters to overlap; others do, but need to specify how the biclusters interact at the overlapping regions. It is therefore of interest to devise methods that allow flexible, overlapping bicluster structures while not forcing the practitioner to specify bicluster interaction models. We propose a mixture modelling framework allowing biclusters to overlap but not requiring the practitioner to postulate any parameter interaction models between biclusters. Sharing a similar intuition to mixture-of-experts models, our model allows biclusters to specify partly overlapping regions of expertise in which the biclusters are able to model the data adequately. The uncertainty over assignments of data points to biclusters depends on the membership of data points to these regions of expertise. We perform inference and parameter estimation via a variational expectation-maximization framework. The model is easily adaptable to different data types and compares favorably to other approaches, both in a binary DNA copy number variation data set and in a miRNA expression data set. 
Date: 03-Jun-2011    Time: 14:00:00    Location: 020 An online system for the remote treatment of aphasia Annamaria Pompili INESC-ID and Universidade de Roma Tor Vergata Abstract—Aphasia is a deterioration of language function that can cause problems in very varied aspects of language: at the phonetic, syntactic or semantic level. There are several types of aphasia, each characterized by various symptoms, but all syndromes have one problem in common: difficulty in naming actions and objects. This characteristic, together with the fact that frequent therapy sessions lead to faster rehabilitation, led to the development of the VITHEA project - Virtual Therapist for the Treatment of Aphasia. To guarantee accessibility anywhere and at any time, the system was developed as a Web application. The software architecture is structured according to the Model-View-Controller (MVC) design pattern and integrates several open-source frameworks for the Java EE platform. The system acts as a therapist, guiding patients through therapy sessions, in which the actual recognition of the correctness of the uttered expressions is carried out by means of the AUDIMUS automatic speech recognizer. Date: 01-Jun-2011    Time: 15:30:00    Location: 020 Rich Prior Knowledge in Learning for Natural Language Processing João Graça Inesc-ID Abstract—We possess a wealth of prior knowledge about most prediction problems, and particularly so for many of the fundamental tasks in natural language processing. Unfortunately, it is often difficult to make use of this type of information during learning, as it typically does not come in the form of labeled examples, may be difficult to encode as a prior on parameters in a Bayesian setting, and may be impossible to incorporate into a tractable model. Instead, we usually have prior knowledge about the values of output variables. 
For example, linguistic knowledge or an out-of-domain parser may provide the locations of likely syntactic dependencies for grammar induction. Motivated by the prospect of being able to naturally leverage such knowledge, four different groups have recently developed similar, general frameworks for expressing and learning with side information about output variables. These frameworks are Constraint-Driven Learning (UIUC), Posterior Regularization (UPenn), Generalized Expectation Criteria (UMass Amherst), and Learning from Measurements (UC Berkeley). This tutorial describes how to encode side information about output variables, and how to leverage this encoding and an unannotated corpus during learning. We survey the different frameworks, explaining how they are connected and the trade-offs between them. We also survey several applications that have been explored in the literature, including applications to grammar and part-of-speech induction, word alignment, information extraction, text classification, and multi-view learning. Prior knowledge used in these applications ranges from structural information that cannot be efficiently encoded in the model, to knowledge about the approximate expectations of some features, to knowledge of some incomplete and noisy labellings. These applications also address several different problem settings, including unsupervised, lightly supervised, and semi-supervised learning, and utilize both generative and discriminative models. The diversity of tasks, types of prior knowledge, and problem settings explored demonstrate the generality of these approaches, and suggest that they will become an important tool for researchers in natural language processing. The tutorial will provide the audience with the theoretical background to understand why these methods have been so effective, as well as practical guidance on how to apply them. 
Specifically, we discuss issues that come up in implementation, and describe a toolkit that provides "out-of-the-box" support for the applications described in the tutorial, and is extensible to other applications and new types of prior knowledge. Date: 27-May-2011    Time: 15:30:00    Location: 336 Reconfiguration schemes to mitigate faults in automated irrigation channels Erik Weyer University of Melbourne Abstract—The infrastructure associated with an automated irrigation channel contains a large number of actuators (electro-mechanical gates) and water level and gate position sensors. Actuator and sensor faults happen from time to time, and they will lead to loss of water and reduced service to farmers unless corrective action is taken. In this seminar we will look at strategies for dealing with such faults. Sensor faults are dealt with by estimating the value of the signal measured by the faulty sensor and using the estimated signal as input to the controllers. An actuator fault necessitates a relaxation of the control objectives, and techniques for reconfiguring the controller to meet the new objectives will be presented. Experimental results from an operational irrigation channel will be shown, demonstrating that despite faults, good control performance can be achieved without disruption of the operation of the channel. Date: 26-May-2011    Time: 14:30:00    Location: IST - Torre Norte, Level 5, room 5.9 System identification and control of irrigation channels E. Weyer University of Melbourne Abstract—Water is an increasingly scarce resource in many parts of the world, and it is important to manage the water resources well. This is of particular importance in networks of irrigation channels where the operational losses are large. 
In irrigation channels the flows and water levels can be controlled by manipulating the positions of mechanical gates located along the channel, and the water losses can be significantly reduced by automating the operation of the channel networks using closed-loop control. In this talk we will briefly present system identification techniques for obtaining models of irrigation channels useful for control design. The control objectives usually involve a trade-off between minimum wastage of water and quality of the water delivery to the farmers. Different control configurations (centralized, decentralized and distributed) for achieving the objectives will be presented, together with control design methods, with emphasis on frequency domain techniques. Experimental results from operational irrigation channels will be presented. Date: 25-May-2011    Time: 15:30:00    Location: IST - Torre Norte, Level 5, room 5.9 Multi-agent control for coordination in water infrastructures and intermodal transport networks R. Negenborn Delft University of Technology Abstract—In this talk we discuss how multi-agent model predictive control can be used for control of large-scale transport infrastructures. Such a control approach has the potential to operate transport infrastructures closer to their capacity limits, while taking into account the increasingly complex dynamics. We will in this talk particularly focus on applications in the domain of water infrastructures and networks of interconnected transport hubs. Date: 25-May-2011    Time: 14:30:00    Location: IST - Torre Norte, level 5, room 5.9 Recovering Capitalization and Punctuation Marks on Speech Transcriptions Fernando Batista INESC-ID Lisboa and IST and ISCTE Abstract—This presentation addresses two important metadata annotation tasks involved in the production of rich transcripts: capitalization and recovery of punctuation marks. 
The main focus of this study concerns broadcast news, using both manual and automatic speech transcripts. Different capitalization models were analysed and compared, indicating that generative approaches capture the structure of written corpora better, while discriminative approaches are better suited to dealing with speech transcripts and are also more robust to ASR errors. The so-called language dynamics have been addressed, and results indicate that capitalization performance is affected by the temporal distance between the training and testing data. Regarding the punctuation task, this study covers the three most frequent marks: full stop, comma, and question mark. Early experiments addressed full-stop and comma recovery, using local features and combining lexical and acoustic information. Recent experiments also combine prosodic information and extend this study to question marks. Date: 25-May-2011    Time: 14:30:00    Location: 020 Distributed control of water channels José Igreja, Filipe Cadete, Luís Pinto Inesc-ID Abstract—The seminar will be presented by José Igreja, Filipe Cadete and Luís Pinto of the Dynamical Systems Control Group. After a concise review of decentralized model predictive control algorithms, a decentralized predictive algorithm with stability constraints will be presented. Experimental results obtained on the NuHCC experimental channel (Univ. de Évora) with this algorithm, and with several versions of the multivariable, decentralized LQG/LTR, will be presented and discussed. Date: 13-May-2011    Time: 11:00:00    Location: Sala de reuniões do DEEC, Torre Norte, Piso 5, IST Alameda Robust linear regression methods in association studies Vanda Lourenço FCT/UNL, Dep. Mathematics and IST/UTL, CEMAT Abstract—Motivation: It is well known that data deficiencies, such as coding/rounding errors, outliers or missing values, may lead to misleading results for many statistical methods. 
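As a small illustration of how a single outlier misleads classical least squares while a robust M-estimator resists it, here is a minimal iteratively reweighted least-squares (IRLS) sketch with Huber weights. This is a generic textbook illustration with an assumed fixed tuning constant (delta = 1.0), not the methodology of the talk:

```python
def fit_line_weighted(x, y, w):
    """Weighted least-squares fit of y = a + b*x (closed form)."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

def huber_fit(x, y, delta=1.0, iters=100):
    """M-regression via IRLS: residuals beyond `delta` are downweighted."""
    a, b = fit_line_weighted(x, y, [1.0] * len(x))   # start from OLS
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        a, b = fit_line_weighted(x, y, w)
    return a, b

# y = 2x with one gross outlier: OLS is pulled away, the M-estimate is not.
x = list(range(10))
y = [2.0 * xi for xi in x]
y[9] = 100.0                                          # contaminated point
_, b_ols = fit_line_weighted(x, y, [1.0] * len(x))
_, b_rob = huber_fit(x, y)
```

On this toy data the OLS slope is pulled far from the true value of 2, while the Huber estimate stays close, which is the qualitative behaviour the abstract describes for contaminated association data.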
Robust statistical methods are designed to accommodate certain types of those deficiencies, allowing for reliable results under various conditions. We analyze the case of statistical tests to detect associations between individual genomic variations (SNPs) and quantitative traits when deviations from the normality assumption are observed. We consider the classical ANOVA tests for the parameters of the appropriate linear model and a robust version of those tests based on M-regression. We then compare their empirical power and level using simulated data with several degrees of contamination. Results: Data normality is nothing but a mathematical convenience. In practice, experiments usually yield data with nonconforming observations. In the presence of this type of data, classical least squares statistical methods perform poorly, giving biased estimates, raising the number of spurious associations and often failing to detect true ones. We show, through a simulation study and a real data example, that the robust methodology can be more powerful and thus more adequate for association studies than the classical approach. Date: 06-May-2011    Time: 14:00:00    Location: 020 Using perspective schemata and a reference model for helping in the design of data integration systems Valéria Magalhães Pequeno Faculdade de Ciências e Tecnologia da UNL Abstract—Sharing and integrating information across multiple autonomous and heterogeneous data sources has emerged as a strategic requirement in modern business. We deal with this problem by proposing a declarative approach based on the creation of a reference model and perspective schemata. The former serves as a common semantic meta-model, while the latter defines correspondences between schemata. Furthermore, using the proposed architecture, we have developed an inference mechanism which allows the (semi-)automatic derivation of new mappings between schemata from previous ones. 
Date: 04-May-2011    Time: 15:30:00    Location: N7.1 Speeding up information extraction using sub-optimal algorithms Gonçalo Fernandes Simões INESC-ID Lisboa and IST Abstract—Information Extraction (IE) proposes techniques capable of extracting, from unstructured text, relevant segments in a given domain and representing them in a structured format. Most of the scientific proposals in IE so far aim at increasing the accuracy of the extraction results. However, the existing IE techniques still have efficiency problems when processing large data volumes. IE optimization aims at executing IE processes as fast as possible with minimal or no impact on the accuracy of the results. In this talk, we will first describe the state of the art in IE optimization. Then, we will present a novel approach for IE optimization. The key idea is to make IE programs faster by using sub-optimal extraction algorithms, which are typically fast but may produce some erroneous results or miss some of the results of traditional algorithms (thus leading to a negative impact on the recall and precision values). We propose a cost model that is able to evaluate not only the expected execution time of a given IE execution plan but also the quality of the results produced, in terms of the expected number of good and bad tuples. Using this cost model, our solution is able to choose a fast execution plan that fulfills a set of objectives imposed by a user (e.g., minimum number of good tuples desired, minimum precision desired). Finally, we will report the preliminary experimental results obtained with two data sets and three IE programs, which show the gains brought by our approach with respect to state-of-the-art solutions. 
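The plan-selection idea sketched in the abstract above can be illustrated with a toy model: each candidate plan carries an estimated execution time and expected counts of good and bad tuples, and the planner picks the fastest plan that still meets the user's objectives. The plan names and numbers below are invented for illustration only; this is not the system's actual cost model:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    est_time_s: float       # estimated execution time (seconds)
    exp_good: float         # expected number of good (correct) tuples
    exp_bad: float          # expected number of bad (erroneous) tuples

    @property
    def exp_precision(self) -> float:
        return self.exp_good / (self.exp_good + self.exp_bad)

def choose_plan(plans, min_good, min_precision):
    """Pick the fastest plan satisfying the user's quality objectives."""
    feasible = [p for p in plans
                if p.exp_good >= min_good and p.exp_precision >= min_precision]
    return min(feasible, key=lambda p: p.est_time_s) if feasible else None

# Hypothetical plans: one exact, two sub-optimal (faster but noisier).
plans = [
    Plan("exact",        120.0, 1000,  10),
    Plan("suboptimal-a",  35.0,  940,  60),
    Plan("suboptimal-b",  12.0,  780, 250),
]
best = choose_plan(plans, min_good=900, min_precision=0.9)
```

With these made-up estimates the planner skips the fastest plan (too few good tuples) and picks the cheaper sub-optimal plan that still satisfies both constraints.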
Date: 15-Apr-2011    Time: 16:00:00    Location: N7.1 Stochastic Modeling of Stem Cell Induction Protocols Filipe Gracio INESC-ID Lisboa and IST Abstract—Generation of pluripotent stem cells starting from adult human cells using induction processes is a technology that has the potential to revolutionize regenerative medicine. However, the production of these so-called iPS cells is still quite inefficient and may be dominated by stochastic effects. In this work we build mass action models of the core circuitry controlling stem cell induction and maintenance. The model includes not only the network of transcription factors NANOG, OCT4 and SOX2, but also important epigenetic regulatory features of DNA methylation and histone modifications. We are able to show that the network topology reported in the literature is consistent with the observed experimental behavior of bistability and inducibility. Based on simulations of stem cell generation protocols, we show that cooperative and independent reaction mechanisms have experimentally identifiable differences in the dynamics of reprogramming, and we analyze such differences and their biological basis. It has been argued that stochastic and elite models of stem cell generation represent distinct fundamental mechanisms. Work presented here illustrates the possibility that they rather represent differences in the amount of information we have about the distribution of cellular states before and during reprogramming protocols. We show that unpredictability decreases as the cell moves through the necessary induction stages, and that identifiable groups of cells with elite-like behavior can come about through a stochastic process. We also show how different mechanisms and kinetic properties impact the prospects of improving the efficiency of iPS cell generation protocols. Date: 15-Apr-2011    Time: 14:00:00    Location: 020 Efficient algorithms for the identification of miRNA motifs in DNA sequences Nuno D. 
Mendes INESC-ID Lisboa and IST Abstract—In the last decade, a novel gene expression regulatory mechanism was discovered. It is mediated by RNA molecules named miRNAs and it acts by silencing target genes. Despite the advancements in this research field, we are still not able to rigorously characterise miRNA genes in order to identify the sequence, structure or contextual requirements that are needed to obtain a functional miRNA. In this work we present a strategy to sieve through the vast amount of stem-loops that can be found in metazoan genomes, significantly reducing the set of candidates while retaining most known miRNA precursors. The approach relies on a combination of robustness measures, on the analysis of precursor structure, and on the incorporation of information about the transcription potential of each candidate. The methodology was applied to the genomes of Drosophila melanogaster and Anopheles gambiae, and to homologs of known precursors in the newly sequenced Anopheles darlingi. We obtain, thus, a restricted and ordered set of candidates for these organisms which fulfil the established prerequisites. Keywords: miRNA, gene finding, single-genome, robustness, stability, secondary structure Date: 01-Apr-2011    Time: 14:00:00    Location: 020 Text Mining and Computational Journalism Mário J. Silva INESC-ID Lisboa and IST Abstract—The wealth of information we are confronted with today calls for new journalistic practices for monitoring, interpreting and summarizing news, and for new models for presenting dynamic, interactive and integrated content. This vision underlies the recent work in "computational journalism".
Some of the challenges currently facing this area concern (i) automatic content analysis, (ii) automatic analysis of explicit and implicit social networks, (iii) the design of interfaces rich in visualization and interaction, for presenting dynamic, personalized news and for learning implicit relations between news and communities of readers, and (iv) case studies in production environments to evaluate computational journalism methodologies. This talk will discuss these challenges and how they have begun to be addressed within the REACTION project, a recent initiative of the UTAustin|Portugal program. Date: 22-Mar-2011    Time: 11:30:00    Location: 336 Computational Methods for DNA Resequencing: A Survey Francisco Fernandes Inesc-ID Abstract—Recent developments in next-generation sequencing technologies allow constantly increasing throughput and shorter running times while reducing the costs of the sequencing process. This leads to the production of huge amounts of data which raise important computational challenges, due not only to the large volume of information but also to the increase in read lengths and sequencing errors. Several assembly and mapping tools have recently been developed for generating assemblies from short, unpaired sequencing reads. However, the need for faster and more accurate algorithmic approaches to keep up with the demand of frequently emerging resequencing projects justifies the growing number of short read mapping tools that have surfaced in the last couple of years. In this report we present an overview of the state-of-the-art software applications, detailing their algorithms and data structures.
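As a generic illustration of the kind of data structure such mappers rely on (a sketch of the common seed-and-extend idea, not the algorithm of any particular surveyed tool), a k-mer index of the reference lets a mapper find candidate positions cheaply and then verify each one:

```python
# Illustrative seed-and-extend sketch: a k-mer index of the reference gives
# candidate positions for a read's seed; each candidate is then verified by
# counting mismatches over the full read length.
from collections import defaultdict

def build_kmer_index(reference: str, k: int):
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read: str, reference: str, index, k: int, max_mismatches: int):
    """Return reference positions where the read aligns with few mismatches."""
    hits = set()
    seed = read[:k]  # use the read prefix as the seed
    for pos in index.get(seed, []):
        window = reference[pos:pos + len(read)]
        if len(window) == len(read):
            mismatches = sum(a != b for a, b in zip(read, window))
            if mismatches <= max_mismatches:
                hits.add(pos)
    return sorted(hits)

ref = "ACGTACGTTAGCACGTACGA"
idx = build_kmer_index(ref, k=4)
print(map_read("ACGTACGA", ref, idx, k=4, max_mismatches=1))  # [0, 12]
```

Real mappers replace the hash index with compressed structures (suffix arrays, FM-indexes) and the mismatch count with banded alignment, but the two-phase seed-then-verify shape is the same.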
Date: 18-Mar-2011    Time: 17:00:00    Location: 020 Vowel Nasality in Portuguese - Cues for the Forensic Identification of Speakers Manuel Domingos INESC-ID and Centro de Linguística da Universidade de Lisboa Abstract—The thesis "Nasalidade Vocálica em Português: Pistas para Identificação Forense de Falantes" (Vowel Nasality in Portuguese: Cues for the Forensic Identification of Speakers) aims to establish cues for the forensic identification of speakers and to discuss the phonological representation of vowel nasality in Portuguese. The thesis analyzed the acoustic correlates of nasal vowels in the European Portuguese (EP) and Angolan Portuguese (AP) systems. The frequencies of the first two formants (F1 and F2) and the fundamental frequency (F0) of the five nasal vowels were analyzed, in both the oral and the nasal portions, taking their articulatory characteristics into account. The durations of the three events of each nasal vowel (i.e., oral portion, nasal portion and nasal appendix) were also measured, as were the durations of the closure, the burst and the VOT of the adjacent [+voice] and [-voice] stops to the right. Among the results obtained, the differences in vowel quality and in the duration of the acoustic events analyzed for the five nasal vowels proved relevant. The two systems (EP and AP) are thus distinguished by their degrees of aperture, as well as by the advancement or retraction of the tongue, considering the F1 and F2 values of each of the five nasal vowels. Regarding duration, nasal vowels were found to be longer in AP than in EP. As for the phonological representation of nasality, productions were observed that can be interpreted as outputs pointing to a single representation of vowel nasality in the two systems. However, some idiosyncratic productions also allowed for the possibility of a nasal consonant homorganic with the following stop occurring in AP.
Regarding speaker identification, the cues consisted of the particularities of each system and of the speaker's sex, with identification possibilities found both at the level of vowel quality and of the trajectories of the formants and of F0, and at the level of the various durational aspects of the acoustic events considered in the thesis. Date: 16-Mar-2011    Time: 14:30:00    Location: 336 Feature extraction for content-based recommendation - Mining the long tail Paula Vaz Lobo Inesc-ID Abstract—The large amount of available items for consumption surpasses our processing capabilities. New content (books, news, music, video, etc.) is published every day, greatly exceeding our capacity to make informed choices. The items that we do not know become potentially useless, because we are not aware of their existence and cannot specifically search for them. Current recommendation systems try to predict what we want to consume. Nevertheless, they quite often tend to recommend popular items, because they are mostly based on ratings. This phenomenon shapes the consumer curve as a Pareto distribution, placing popular rated items in the "head" (the first 20% of the total items) and the unpopular unrated items in the "long tail" (the remaining 80%). Items in the long tail have a recognized interest for smaller groups of people. However, current recommendation systems are failing to reveal the unpopular items, because of rating scarcity. There is a need to assist people in finding interesting unrated items in the long tail. In this thesis we explore textual features of documents in the long tail. We explore document content to find similar documents using a top-N recommendation algorithm. We use semantic similarity (documents about the same subjects) as well as stylometric similarity (documents with similar types of writing style) to find documents that are closer to user preferences. Document similarity is measured using the documents' semantic and stylometric features.
The combination of these two feature types can improve recommendation novelty and help people find interesting documents in the long tail. Date: 09-Mar-2011    Time: 14:30:00    Location: 336 Control-based Clause Sharing in Parallel SAT Youssef Hamadi Microsoft Research Abstract—Clause sharing is key to the performance of parallel SAT solvers. However, without limitation, sharing clauses might lead to an exponential blow-up in communication or to the sharing of irrelevant clauses. This talk presents two innovative policies to dynamically adjust the size of shared clauses between any pair of processing units. The first approach controls the overall number of exchanged clauses whereas the second additionally exploits the relevance quality of shared clauses. Date: 28-Feb-2011    Time: 16:00:00    Location: 336 A Tutorial on Genetic Programming Sara Silva Inesc-ID Abstract—Genetic Programming (GP) is the youngest paradigm inside the Artificial Intelligence field called Evolutionary Computation. Created by John Koza in 1992, it can be regarded as a powerful generalization of Genetic Algorithms, but unfortunately it is still poorly understood outside the GP community. The goal of this tutorial is to provide motivation, intuition and practical advice about GP, along with very few technical details. Date: 15-Feb-2011    Time: 13:00:00    Location: IST, Room PA2 Methods for the Detection of Multilocus Interactions Orlando Anunciação INESC-ID Lisboa and IST Abstract—In recent years there has been intense research to find genetic factors that influence common complex traits. The approach that is commonly followed to discover those associations between genetic factors and complex traits such as diseases is to perform a Genome-Wide Association Study (GWAS). It has been pointed out that there is no single marker for disease risk and no single protective marker but, rather, a collection of markers that confer a graded risk of disease.
As an example of this, it has been suggested that many genes with small effects, rather than a few genes with strong effects, contribute to the development of asthma. For human height, the heritability explained by SNPs discovered with GWAS is about 5%. However, a recent study showed that it is possible to explain around 45% of the phenotypic variance for height with GWAS data. The problem is that the individual effects of the interacting SNPs are too small to be detected with common statistical methods. This shows that there is a need for powerful methods that are able to consider interactions between SNPs with low marginal effects. In this document we describe a wide range of methods that have been proposed to detect interactions between SNPs in association study data. We will give examples of statistical methods (explaining also how to deal with the multiple testing problem), search methods (deterministic and stochastic) and machine learning methods. Date: 11-Feb-2011    Time: 14:00:00    Location: 336 Efficient Arithmetic Operators Applied to DSP Architectures Eduardo Costa Universidade Católica de Pelotas Abstract—This presentation is based on the research topics that professor Eduardo Costa has been working on in Brazil. The presentation is divided into three main topics: Efficient Dedicated Multiplication Blocks for 2's Complement Radix-2m Array Multipliers; Fast Forward and Inverse Transforms for the H.264/AVC Standard Using Hierarchical Adder Compressors; and Radix-2 Decimation in Time (DIT) FFT Implementation Based on a Matrix-Multiple Constant Multiplication Approach. The first topic presents improvements to the radix-2m binary array multiplier architecture previously proposed in the literature, using a different scheme to optimize the dedicated modules that perform radix-16 and radix-256 multiplication.
In the second topic, efficient adder compressors are used to reduce the computational complexity of the Forward and Inverse Transforms, and compressor-based architectures for the H.264/AVC transforms are developed. Finally, in the third topic, the main goal is the implementation of a fully-parallel radix-2 Decimation in Time (DIT) Fast Fourier Transform (FFT), using Matrix-Multiple Constant Multiplication (M-MCM) at the gate level.
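The adder-compressor idea underlying these architectures can be illustrated with a small behavioural model. This is a generic sketch of carry-save reduction, not the presented hardware designs:

```python
# Behavioural sketch of the compressor idea: a 3:2 compressor (a full adder)
# reduces three operand bits to a sum bit and a carry bit, so many partial
# products can be summed in carry-save form with only one final
# carry-propagate addition at the end.

def compress_3_2(a: int, b: int, c: int):
    """Full adder on single bits: returns (sum, carry)."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def carry_save_sum(x: int, y: int, z: int, width: int = 16):
    """Reduce three integers to a (sum_word, carry_word) pair, bit by bit."""
    s_word, c_word = 0, 0
    for i in range(width):
        s, c = compress_3_2((x >> i) & 1, (y >> i) & 1, (z >> i) & 1)
        s_word |= s << i
        c_word |= c << (i + 1)  # carries weigh twice as much
    return s_word, c_word

s, c = carry_save_sum(13, 9, 6)
print(s + c)  # a final carry-propagate add yields 13 + 9 + 6 = 28
```

In hardware the point is timing: each 3:2 stage has constant delay regardless of word width, so carry propagation is paid only once, after all operands have been compressed.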

Eduardo da Costa received the five-year engineering degree in electrical engineering from the University of Pernambuco, Recife, Brazil, in 1988, the M.Sc. degree in electrical engineering from the Federal University of Paraiba, Campina Grande, Paraíba, Brazil, in 1991, and the Ph.D. degree in computer science from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2002. Part of his doctoral work was developed at the Instituto de Engenharia de Sistemas e Computadores (INESC-ID), Lisbon, Portugal. He is currently a Professor with the Departments of Electrical Engineering and Informatics, Catholic University of Pelotas (UCPel), Pelotas, Brazil. He was a post-doc at UFRGS from November 2009 to April 2010, and advises Master's theses in the Computer Science program at UCPel. His research interests are VLSI architectures and low-power design.

Date and location
Tuesday, February 8, 2011, 11h00, room 336 at INESC-ID, Lisbon.

Seminars page of INESC-ID
Seminar organized by the ALGOS group (algos.inesc-id.pt)

Date: 29-Sep-2010    Time: 13:30:00    Location: 336 Distributed Compensations with Interruption in Long-Running Transactions Roberto Bruni Università di Pisa Abstract—(joint work with Anne Kersten, Ivan Lanese, Giorgio Spagnolo)
Compensations are a well-known and widely used mechanism to ensure the consistency and correctness of long-running transactions in the area of databases. More recently, several calculi emerged in the area of business process modelling, service-oriented and global computing to provide the necessary formal ground for compensation primitives like those exploited in orchestration languages such as WS-BPEL. The focus of this work is on the compensation policy to select for parallel branches. The choice of the right strategy allows the user to prevent unnecessary actions in case of an abort. In the past, different policies have emerged in cCSP and Sagas. We propose new, optimal, operational and denotational semantics for the parallel kernel of cCSP/Sagas with interruption and prove the correspondence between the two. The new semantics guarantees that distributed compensations may only be observed after a fault has actually occurred.
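The compensation discipline under study can be illustrated with a toy sketch. This is plain Python, not cCSP/Sagas notation, and the step names are invented; it shows only the basic policy the calculi formalize: each completed step installs a compensation, and on a fault the installed compensations run in reverse order of installation.

```python
# Toy long-running transaction: completed steps install compensations,
# and a fault triggers them most-recent-first. Only steps that actually
# completed are compensated, matching the "observed after a fault" idea.

class LongRunningTransaction:
    def __init__(self):
        self.log = []
        self._compensations = []

    def step(self, name, action, compensation):
        try:
            action()
            self.log.append(f"done:{name}")
            self._compensations.append((name, compensation))
        except Exception:
            self.log.append(f"fault:{name}")
            self._abort()
            raise

    def _abort(self):
        # Compensate only what completed, in reverse order of installation.
        for name, comp in reversed(self._compensations):
            comp()
            self.log.append(f"compensated:{name}")

def fail():
    raise RuntimeError("card declined")

tx = LongRunningTransaction()
tx.step("book-flight", lambda: None, lambda: None)
tx.step("book-hotel", lambda: None, lambda: None)
try:
    tx.step("charge-card", fail, lambda: None)
except RuntimeError:
    pass
print(tx.log)
```

The interesting questions the talk addresses, which this sketch deliberately omits, are what happens when the faulting step runs in parallel with others and when sibling branches may be interrupted.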

Biography
Vagner Rosa is assistant professor at the Federal University of Rio Grande (FURG - Rio Grande - Brazil), and is currently a full-time PhD student at Federal University of Rio Grande do Sul (UFRGS - Porto Alegre - Brazil). His thesis work is the development of hardware architectures for video encoding according to the H.264 standard. His C.V. can be viewed at CNPQ.

Date and location
Wednesday, December 9, 2009, 11h00, room 336 at INESC-ID, Lisbon.

Date and location
Tuesday, November 3, 2009, 11h30, room 336 at INESC-ID, Lisbon.
Date: 03-Nov-2009    Time: 11:30:00    Location: 336 Hacking life: how to build a new life form in your computer Arlindo L. Oliveira Inesc-ID Abstract—Synthetic biology is a new field of research that combines computer models of biological systems with DNA synthesis and genetic engineering techniques in order to design and build new biological functions, systems and organisms. While still in its infancy, this area of research is expected to develop rapidly, so that very soon researchers, companies and hackers will be able to design, build and release new organisms into the wild. In this talk, I will address some questions and challenges posed by this technology, and, in particular, the role that will be played by research areas such as Systems Biology, Bioinformatics and Information Systems in the design of artificial life forms. Date: 23-Oct-2009    Time: 14:00:00    Location: 336 Solving Implicit Problems and Using Cyclic Graphs for Graphics Brian Wyvill University of Victoria Abstract— The talk will be divided into two parts. In the first part implicit blending is discussed, and in the second, cyclic scene graphs. Blending is both the strength and the weakness of implicit surfaces. While it gives them the unique ability to smoothly merge into a single, arbitrary shape, it makes implicit modelling hard to control, since implicit surfaces blend at a distance, in a way that heavily depends on the slope of the field functions that define them. We have found that to be more intuitive and easy to control, blends should be located where two objects intersect, while enabling other parts of the objects to come as close to each other as desired without being deformed. Our solution relies on automatically defined blending regions around the intersection curves between two objects.
Outside of these volumes, a clean union of the objects is computed thanks to a new operator that guarantees the smoothness of the resulting field function; meanwhile, a smooth blend is generated inside the blending regions. This talk describes joint work done with French researchers Marie-Paule Cani, Loic Barthe and Adrien Bernhardt.
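The two basic operators the work builds on can be contrasted in a few lines. This is a generic illustration, not the authors' operator: their contribution is precisely to confine the smooth behaviour to regions around the intersection curve, whereas the naive soft blend below acts everywhere.

```python
# Scalar fields for two spheres, a sharp union, and a naive smooth blend.
# Field convention: > 0 inside, = 0 on the surface, < 0 outside.
import math

def sphere_field(center, radius):
    cx, cy, cz = center
    def f(x, y, z):
        return radius - math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2)
    return f

f1 = sphere_field((0.0, 0.0, 0.0), 1.0)
f2 = sphere_field((1.2, 0.0, 0.0), 1.0)

def sharp_union(f, g):
    # Max of fields: a clean, creased union with no blending at all.
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def smooth_blend(f, g, k=4.0):
    # Soft-max (log-sum-exp): always at least the sharp union, so the
    # blended surface bulges outward wherever both fields are comparable.
    return lambda x, y, z: math.log(
        math.exp(k * f(x, y, z)) + math.exp(k * g(x, y, z))) / k

p = (0.6, 0.6, 0.0)  # near where the two spheres meet
print(smooth_blend(f1, f2)(*p) >= sharp_union(f1, f2)(*p))  # True
```

The unwanted side effect motivating the paper is visible in this formulation: the soft blend also inflates surfaces far from the intersection, which is what restricting the blend to volumes around the intersection curves avoids.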

The second half of the talk describes work on scene graphs. Conventional scene graphs use directed acyclic graphs. We investigate scene graphs with recursive cycles for defining graphical scenes. This permits both conventional scene graphs and iterated function systems within the same framework and opens the way for other definitions not possible with either. We explore several mechanisms for limiting the implied recursion in cyclic graphs, including both global and local limits. This approach permits a range of possibilities, including scenes with carefully controlled and locally varying recursive depth. It has applications in art and design, and opens up interesting avenues for future research. The second half of the talk describes work done with Prof. Neil Dodgson, University of Cambridge.
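The core idea of cyclic scene graphs with a recursion limit can be sketched as follows. This is a hedged illustration, not the authors' system: a real renderer would accumulate a transform at each visit, and the paper also explores per-edge (local) limits, which this sketch omits.

```python
# A scene graph whose child lists may refer back to ancestors (a cycle),
# with a global recursion limit bounding the implied recursion. Traversal
# records (depth, name) visits instead of drawing.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []  # may contain an ancestor, or this node itself

def traverse(node, max_depth, depth=0, out=None):
    if out is None:
        out = []
    if depth > max_depth:
        return out  # the global limit cuts the cycle here
    out.append((depth, node.name))
    for child in node.children:
        traverse(child, max_depth, depth + 1, out)
    return out

branch = Node("branch")
leaf = Node("leaf")
branch.children = [leaf, branch]  # cyclic edge back to itself
visits = traverse(branch, max_depth=2)
print(visits)
```

With a transform on the cyclic edge this is exactly an iterated function system, while an acyclic graph degenerates to a conventional scene graph, which is the unification the talk describes.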

Bio:

Brian Wyvill graduated from the University of Bradford, UK, with a PhD in computer graphics in 1975. As a post-doc he worked at the Royal College of Art and helped make some animated sequences for the Alien movie. He emigrated to Canada in 1981, where he has been working in the area of implicit modeling, sometimes with his brother Geoff Wyvill (University of Otago). He is also interested in sketch-based modeling and NPR and enjoys combining these areas of research. In 2007 Brian took up an appointment as Professor and Canada Research Chair at the University of Victoria, British Columbia. Date: 15-Oct-2009    Time: 12:00:00    Location: 336 SMART-System - Metadata-based Sports Video Database, its Development and Experience Chikara Miyaji Japan Institute of Sports Sciences Abstract—SMART-system is a video database developed by the Japan Institute of Sports Sciences (JISS). More than 12 NSFs are using the system, with 39,601 video files and 380,129 metadata entries archived as of August 2009. SMART-system is a metadata-based movie database specialized for sports performance training and coaching. The main characteristics are summarized as follows: (1) based on streaming technology which is enhanced for sports movement analysis, (2) metadata-based searching and requesting system specialized for sports, (3) coaching annotation system, and (4) authentication system for distributed streaming servers. In this talk, the technical aspects and internal structure of the system will be explained, along with the experience of providing this software system, especially from the viewpoint of sport-specific difficulties and solutions.
Date: 13-Oct-2009    Time: 15:00:00    Location: 336 Preparing a cyanobacterial chassis for H2 production: a synthetic biology approach Catarina Pacheco Institute for Molecular and Cell Biology (IBMC) Abstract—Molecular hydrogen (H2) is an environmentally clean energy carrier that can be a valuable alternative to the limited fossil fuel resources of today. The BioModularH2 project aims at designing reusable, standardized molecular building blocks that, integrated into a “chassis”, will result in a photosynthetic bacterium containing engineered chemical pathways for competitive, clean and sustainable hydrogen production. The unicellular cyanobacterium Synechocystis sp. PCC 6803 (Synechocystis) is being used as the photoautotrophic “chassis” for this project. To prepare the chassis for optimal H2 production, the Synechocystis native bidirectional hydrogenase was inactivated. Later on, a synthetic circuit containing a highly efficient heterologous hydrogenase will be introduced into the “chassis”. Due to hydrogenase sensitivity to molecular oxygen, and to provide the anaerobic environment required for optimal heterologous hydrogenase activity, synthetic oxygen-consuming devices are being prepared, based on native and heterologous enzymes that use O2 as substrate, and will be subsequently tested. Finally, the integration of the designed synthetic circuits into the “chassis” will provide an anaerobic environment within the cell for an optimized and highly active hydrogenase. Date: 09-Oct-2009    Time: 14:00:00    Location: 336 Power Macro-Modelling using an Iterative LS-SVM Method J. Monteiro Inesc-ID Abstract—In this talk I will describe a new method for power macromodeling of functional units for high-level power estimation based on Least-Squares Support Vector Machines (LS-SVM). This method improves the already good modeling capabilities of the basic LS-SVM method in two ways.
First, a modified norm is used that is able to take into account the weight of each input for global power consumption in the computation of the kernels. Second, an iterative method is proposed where new data points are selectively added as support vectors to increase the generalization of the model. The macromodels obtained provide not only excellent accuracy on average (close to 1% error) but, more importantly, thanks to our proposed modified kernels, we were able to reduce the maximum error to values close to 10%.
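The basic machinery can be sketched in a few lines: LS-SVM regression reduces to solving one linear system, and the per-input weighting described above amounts to a weighted norm inside the RBF kernel. This is a generic sketch under invented data and weights, not the authors' implementation, and it omits the iterative support-vector selection step.

```python
# LS-SVM regression with a per-input-weighted RBF kernel. The LS-SVM dual
# solves [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
import numpy as np

def weighted_rbf(u, v, w, sigma=1.0):
    d = u - v
    return np.exp(-np.sum(w * d * d) / (2 * sigma**2))  # weighted norm

def lssvm_fit(X, y, w, gamma=100.0, sigma=1.0):
    n = len(y)
    K = np.array([[weighted_rbf(X[i], X[j], w, sigma) for j in range(n)]
                  for i in range(n)])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # ridge term from the LS-SVM dual
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: float(alpha @ np.array(
        [weighted_rbf(x, xi, w, sigma) for xi in X]) + b)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
w = np.array([1.0, 0.1])            # invented: first input weighted higher
y = np.sin(3 * X[:, 0]) + 0.05 * X[:, 1]
model = lssvm_fit(X, y, w)
err = max(abs(model(x) - t) for x, t in zip(X, y))
print(f"max training error: {err:.4f}")
```

Down-weighting the second input makes the kernel nearly insensitive to it, which is the intuition behind weighting each input by its contribution to global power.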

Date and location
Tuesday, October 6, 2009, 11h30, room 336 at INESC-ID, Lisbon.

Jorge Fernandez Villena received a degree in Telecommunication Engineering from the E.S.T.I.I.T. at Universidad de Cantabria, Spain, in 2005. He is currently working towards a Ph.D. degree in Electrical and Computer Engineering at the Instituto Superior Técnico, Technical University of Lisbon, Portugal, and he is a researcher in the ALGOS Group at INESC-ID. His research interests include integrated circuit interconnect modeling and simulation, with emphasis on numerical algorithms for parametric model order reduction.

Date and location
Tuesday, July 21, 2009, 11h30, room 336 at INESC-ID, Lisbon.
Date: 21-Jul-2009    Time: 11:30:00    Location: 336 Toward Energy-efficient Computing David Brown Sun Microsystems Inc. Abstract—As a result of both the increased average power consumed by a single system, and the rapid growth in the number of total computer systems deployed, energy consumption by computers and related technologies is growing at an exponential rate analogous to Moore’s Law. The use of energy has become a consequential factor in the design of contemporary computer systems.
This talk frames the energy problem in general, looking at its current implications in the computing space. I’ll introduce several of the basic technologies that have been introduced to help manage power use on modern computing platforms, then describe some recent experience in their application as seen from my vantage point at Sun. The conclusion is that while some of these mechanisms are enabling, they seem far from sufficient to realise optimal energy use in computing. How should the energy problem be framed more specifically for computer system designers?
I will give a simple vision for energy-efficient computing, and describe a number of the elements that appear necessary if we are to solve it along those lines. Some likely avenues of research are suggested.

David Brown is presently working on the Solaris operating system’s core power management facilities, with particular attention to Sun’s x64 hardware platforms. Earlier at Sun he led the Solaris ABI program: a campaign to develop and deliver a practical approach to binary compatibility for applications built on Solaris.
Before coming to Sun, Dave was a member of the research staff at Stanford University, where he worked with Andy Bechtolsheim on the prototype SUN Workstation; later was a founder of Silicon Graphics, where he developed early system and network software and designed a floating point accelerator; and subsequently established the Workstation Systems Engineering Group for DEC in Palo Alto along with Steve Bourne, where he built the team that developed the graphics architecture applied in DEC’s MIPS workstations and the PixelStamp and PixelVision subsystems.
Dave’s technical background is computer systems (operating systems and networking), and architecture with some specific attention to the design of high-performance interactive graphics systems.
Dave received a Ph.D. in Computer Science from Cambridge University, for a dissertation which introduced the “Unified Memory Architecture” approach for the integration of high performance graphics subsystems in a general-purpose computing architecture. This idea is now widely applied, notably in the current Intel processor and memory system architecture.

Érika Fernandes Cota obtained her Ph.D. degree in Computer Science at UFRGS - Federal University of Rio Grande do Sul in 2003. Currently, she is an assistant professor at the Federal University of Rio Grande do Sul. She has 8 articles published in specialized journals and over 40 conference papers, and has co-supervised 2 master's dissertations. She works in the area of computer science, with emphasis on hardware and software IC testing. In her professional activities she has interacted with 36 other researchers as co-authors of scientific papers. Her most significant scientific and technological areas are testing, BIST, embedded self-test, fault-tolerant systems, integrated systems and embedded systems.

Date and location
Monday, June 22, 2009, 15h30, room 336 at INESC-ID, Lisbon.

Date and location
Wednesday, April 8, 2009, 14h30, room 336 at INESC-ID, Lisbon.
Date: 08-Apr-2009    Time: 14:30:00    Location: 336 A MILP-based Approach to Path Sensitization of Embedded Software José Carlos Campos Costa Inesc-ID Abstract— We propose a new methodology based on Mixed Integer Linear Programming (MILP) for determining the input values that will exercise a specified execution path in a program. In order to seamlessly handle variable values, pointers and arrays, and variable aliasing, our method uses memory addresses for variable references. This implies a dynamic methodology where all decisions are made as the program executes. During this execution, we gather constraints for the MILP problem, whose solution will directly yield the input values for the desired path. We present results that demonstrate the effectiveness of this approach. This methodology was implemented in a fully functional tool that is capable of handling medium-sized real programs specified in the C language. Our work is motivated by the complexity of validating embedded systems and uses an approach similar to an existing method for HDL functional vector generation. We are currently integrating this method with the mentioned hardware method. The joint solution of the MILP problems will provide a hardware/software co-validation tool.
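The core idea can be illustrated with a toy example, hedged accordingly: this is not the authors' tool. Walking a desired path through a small program yields one linear constraint per branch decision; the real methodology hands that constraint set to a MILP solver, whereas here the set is tiny enough for exhaustive search over a bounded integer domain to stand in for the solver.

```python
# Path sensitization sketch: find inputs (a, b) that drive a toy program
# down a chosen path. Each taken branch contributes one constraint.
from itertools import product

# Program under test:
#   if a + b > 10:        # branch 1
#       if a - b <= 2:    # branch 2
#           target()
# Desired path: branch 1 TRUE, then branch 2 TRUE.
path_constraints = [
    lambda a, b: a + b > 10,   # branch 1 taken
    lambda a, b: a - b <= 2,   # branch 2 taken
]

def sensitize(constraints, lo=0, hi=15):
    """First (a, b) in the bounded domain satisfying every path constraint."""
    for a, b in product(range(lo, hi + 1), repeat=2):
        if all(c(a, b) for c in constraints):
            return a, b
    return None

inputs = sensitize(path_constraints)
print(inputs)  # (0, 11) reaches target()
```

The MILP formulation scales where enumeration cannot, and, as the abstract notes, gathering constraints dynamically over memory addresses is what lets the approach handle pointers, arrays and aliasing uniformly.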


Date and location
Friday, March 6, 2009, 14h00, room 336 at INESC-ID, Lisbon.
Date: 06-Mar-2009    Time: 14:00:00    Location: 336 Programming Distributed Systems: an Introduction to MPI J. Monteiro Inesc-ID Abstract—The Message Passing Interface Standard (MPI) is a standard defined by the MPI Forum, which has over 40 members, including vendors, researchers, software library developers, and users. The goal of MPI is to establish a portable, efficient, and flexible standard for writing distributed programs based on message passing. MPI is not an IEEE or ISO standard, but has become the de facto "industry standard" for writing message passing programs on HPC platforms.
In this talk I will present a gentle introduction to MPI. I will start by discussing the message passing programming model. Then I will cover the basics of MPI, presenting point-to-point communication and collective operations.
"I am not sure how I will program a Petaflop machine, but I am sure that I will need MPI somewhere."
Horst D. Simon
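MPI programs are usually written in C or Fortran (or through wrappers such as mpi4py); to illustrate just the message-passing model discussed in the talk, the sketch below uses in-process queues as point-to-point channels between two "ranks". The channel and rank names are inventions for illustration, not MPI API.

```python
# Message-passing model in miniature: one inbox queue per rank, blocking
# send/recv between two workers, analogous in spirit to MPI_Send/MPI_Recv.
import threading, queue

channels = {0: queue.Queue(), 1: queue.Queue()}  # one inbox per rank

def send(dest, msg):
    channels[dest].put(msg)

def recv(rank):
    return channels[rank].get()  # blocks until a message arrives

results = {}

def worker(rank):
    if rank == 0:
        send(1, "ping")          # point-to-point send to rank 1
        results[0] = recv(0)     # wait for the reply
    else:
        msg = recv(1)            # blocking receive
        send(0, msg + "/pong")   # reply to rank 0
        results[1] = msg

threads = [threading.Thread(target=worker, args=(r,)) for r in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(results)
```

The essential property the sketch captures is that ranks share no variables and coordinate only through explicit messages; in real MPI the ranks are separate processes, possibly on separate machines.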
Date and location
Friday, February 20, 2009, 14h00, room 336 at INESC-ID, Lisbon.
Date: 20-Feb-2009    Time: 14:00:00    Location: 336 Estimating Local Ancestry in Admixed Populations Eran Halperin International Computer Science Institute (ICSI) Abstract—Large-scale genotyping of SNPs has shown great promise in identifying markers that could be linked to diseases. One of the major obstacles involved in performing these studies is that the underlying population sub-structure could produce spurious associations. Population sub-structure can be caused by the presence of two distinct sub-populations or a single pool of admixed individuals. In this talk, I will focus on the latter, which is significantly harder to detect in practice. New advances in this research direction are expected to play a key role in identifying loci which differ among populations and are still associated with a disease. Furthermore, the detection of an individual's ancestry has important medical implications. I will describe two methods that we have recently developed to detect admixture, or the locus-specific ancestry, in an admixed population. We have run extensive experiments to characterize the important parameters that have to be optimized when considering this problem; I will describe the results of these experiments in the context of existing tools such as SABER and STRUCTURE. Date: 29-Jan-2009    Time: 11:00:00    Location: 336 Programming Multicores J. Monteiro Inesc-ID Abstract—As the microprocessor industry switches from increasing clock frequencies and implicit instruction-level parallelism to multicores, the free ride for programmers is over. In order to take advantage of the continuing increase of computational power, software development needs to address explicit parallelism. In this talk I will present an introduction to multicore programming, focusing on OpenMP and NUMA. I will cover basic material, but in-depth discussion of particular topics is welcome.
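The work-sharing pattern at the heart of OpenMP (its parallel-for construct) has no direct Python equivalent, but the pattern itself, splitting loop iterations across workers while preserving the sequential result, can be sketched with a thread pool. This is an analogy, not OpenMP; Python threads share one interpreter lock, so real speedups need processes or native code.

```python
# Parallel-for pattern: the same loop computed sequentially and with its
# iterations divided among 4 workers. The result must not depend on which
# worker executed which iteration.
from concurrent.futures import ThreadPoolExecutor

def f(i):
    return i * i  # an independent, side-effect-free loop body

n = 1000
# Sequential loop:
sequential = sum(f(i) for i in range(n))

# The same loop with iterations shared among workers:
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = sum(pool.map(f, range(n)))

print(sequential == parallel)  # True: work-sharing preserves the result
```

The requirement that iterations be independent is exactly what OpenMP's loop construct demands as well; dependences between iterations are where explicit parallel programming gets hard.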
Date and location
Friday, January 23, 2009, 14h00, room 4 at INESC-ID, Lisbon.
Date: 23-Jan-2009    Time: 14:00:00    Location: 04 Challenges in the Application of Quantum Mechanics to Biomolecular Problems Ricardo Mata Faculdade de Ciências da Universidade de Lisboa Abstract—The range of application of quantum mechanical methods has been increasing rapidly in the last few years. Today, one is able to study large biomolecular systems at a level of accuracy which a decade ago was only possible for 5-10 atoms. These developments are an outcome of the increasing computer power available to the quantum chemist, but also of new theories and procedures which have helped remove some of the major bottlenecks in the calculations. In this talk, a short introduction to the methods of molecular and quantum mechanics will be given, also addressing their coupling in multi-level calculations. The application of these methods in the study of an enzymatically catalyzed reaction will be discussed, with a focus on the major computational bottlenecks as well as accuracy. Finally, I will review some of the latest developments on the use of heterogeneous acceleration in the field, namely with Nvidia GPUs and ClearSpeed processors. Date: 21-Jan-2009    Time: 14:00:00    Location: 336 Modelling HIV-1 Evolution under Drug Selective Pressure Anne-Mieke Vandamme Katholieke Universiteit Leuven Abstract—This talk will address methods for the analysis and modeling of HIV evolution, including phylogenetics and the relationship between genotype and phenotype of the HIV virus. Date: 16-Jan-2009    Time: 16:00:00    Location: 336 Kernel methods for the prioritization of candidate genes Yves Moreau Katholieke Universiteit Leuven Abstract—Hunting disease genes is a problem of primary importance in biomedical research. Biologists usually approach this problem in two steps: first a set of candidate genes is identified using traditional positional cloning or high-throughput genomics techniques; second, these genes are further investigated and validated in the wet lab, one by one.
To speed up discovery and limit the number of costly wet lab experiments, biologists must test the candidate genes starting with the most probable candidates. So far, biologists have relied on literature studies, extensive queries to multiple databases, and hunches about expected properties of the disease gene to determine such an ordering. Recently, the data mining tool ENDEAVOUR has been introduced, which performs this task automatically by relying on different genome-wide data sources, such as Gene Ontology, literature, microarray, sequence, and more. A novel kernel method that operates in the same setting is presented: based on a number of different views on a set of training genes, a prioritization of test genes is obtained. A thorough theoretical analysis of the guaranteed performance of the method will also be presented. Finally, the application of the method to the disease data sets on which ENDEAVOUR has been benchmarked will be reported, showing that a considerable improvement in empirical performance has been obtained.

Date: 19-Dec-2008    Time: 11:00:00    Location: 336

Biochemical neutral solutions using S-system models
Marco Vilela, University of Texas M.D. Anderson Cancer Center
Abstract—One of the major difficulties of modeling biological systems from time series is the identification of a parameter set which gives the model the same dynamical behavior as the data. A more ambitious goal is the identification of the biochemical interactions of the system's components from the model parameters. In this talk, we present a method for identifying the S-system parameter space from biological time series, based on a Monte Carlo process and a parameter optimization algorithm. The proposed methodology was applied to real time series data from the glycolytic pathway of the bacterium Lactococcus lactis, and ensembles of models with different network topologies were generated.
The parameter optimization algorithm was also successfully applied to the same dynamical data while imposing a pre-specified network topology from previous knowledge, suggesting the method as an exploration tool for hypothesis testing and the design of new experiments.

Date: 03-Dec-2008    Time: 15:00:00    Location: 04

Charge transport at the surface of organic semiconductors for molecular electronics
Helena Alves, INESC MN
Abstract—The investigation of material systems in which new electronic phenomena arise from the interactions of molecules is an active topic of research. In particular, electrical transport at the surface of organic materials is a key issue in molecular electronics. Field effect transistors (FETs) are not only a powerful tool to measure charge transport at the interface level but are also an essential element in modern electronics. In this talk, a general overview of molecular electronics will be given, with particular emphasis on materials, some device applications and, in more detail, organic field effect transistors (OFETs). Single crystal OFET measurements performed on three different systems, TMTSF, PDIF-CN2 and TTF/TCNQ, will be presented, and some of the key topics in OFETs will be discussed. TMTSF devices show clear signatures of intrinsic transport (high mobility, increasing with lowering temperature) and p-type behaviour. PDIF-CN2 presents n-type transport and very good device characteristics, with room-temperature electron mobility as high as 6 cm2/Vs in vacuum and 3 cm2/Vs in air, the best n-type mobility reported in an OFET to date. Finally, a new electronic system created at the interface of two different organic crystals will be introduced. Despite the fact that the two organic crystals (TTF and TCNQ) are large gap semiconductors and therefore essentially insulating, their interface turns out to exhibit metallic character, with very high conductivity that becomes larger as the temperature is lowered.
As the interface assembly process is simple and can be applied to crystals of virtually any conjugated molecule, the combination of molecules with different electronic properties will enable the assembly of molecular interfacial systems possessing properties that have no analogue in molecular bulk materials.

Date: 26-Nov-2008    Time: 12:00:00    Location: Auditório do INESC-Avila, Av. Duque de Ávila, 23

Probabilistic Control and Applications to Gene Regulatory Networks
Miguel José Simões Barão, INESC-ID
Abstract—Probabilistic control aims to find a probabilistic decision rule in order to control a stochastic dynamic system. In this formulation, both the system model and the decision rules are described by conditional probability functions that are iterated together in a closed loop. This kind of formulation fits well in problems where a large number of agents act on the same system simultaneously, one possible application being gene regulatory networks. This talk addresses the formulation of the probabilistic control problem, the optimization of probability functions, and an illustration of its application to gene regulatory networks. It is shown that although the original problem is high-dimensional, its solution can be computed very efficiently.

Date: 13-Nov-2008    Time: 15:00:00    Location: 336

Introduction to GPU Programming using CUDA
Carlos Coelho, Cadence Research Laboratories
Abstract—Graphical Processing Units (GPUs) boast an impressive amount of computational power and memory bandwidth at commodity prices, driven low by volume and competition in the gaming consumer market. With the introduction of high-level APIs, such as NVIDIA's Compute Unified Device Architecture (CUDA), harvesting the computational power of the GPU for general computing applications has become straightforward. In this talk we present an overview of NVIDIA's current GPU architecture.
A brief introduction to CUDA will be given, followed by a discussion of performance issues and optimization techniques.
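As an illustration of the programming model the abstract refers to (not material from the talk), the following pure-Python sketch mimics how a CUDA kernel launch decomposes work into a grid of blocks of threads, with each thread computing one output element from its global index:

```python
# Illustrative sketch only: CUDA runs a kernel over a grid of thread blocks;
# each thread derives a global index from (blockIdx, threadIdx, blockDim).
# Here the parallel launch is simulated with sequential Python loops.

def vector_add_kernel(block_idx, thread_idx, block_dim, a, b, out):
    """Body of a CUDA-style kernel: one thread handles one element."""
    i = block_idx * block_dim + thread_idx   # global thread index
    if i < len(a):                           # guard: last block may overrun
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Simulate a <<<grid_dim, block_dim>>> kernel launch sequentially."""
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, t, block_dim, *args)

n = 10
a = list(range(n))
b = [10 * x for x in a]
out = [0] * n
block_dim = 4
grid_dim = (n + block_dim - 1) // block_dim  # ceil(n / block_dim) blocks
launch(vector_add_kernel, grid_dim, block_dim, a, b, out)
```

In real CUDA the two loops run concurrently on the GPU, which is why the in-bounds guard and the index arithmetic matter: the launch is sized in whole blocks, so some threads fall past the end of the data.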

Carlos Pinto Coelho received his Ph.D. in Electrical Engineering and Computer Science from MIT in September 2007. In 2001 he joined the startup company AltraBroadband, where he worked on the development of the Nexxim circuit simulator. In 1999 and 2001 he received his engineering and master's degrees in Computer and Electrical Engineering from the Instituto Superior Técnico, in Lisbon. His interests include simulation and modeling of physical systems in general and biological systems in particular, artificial intelligence, parallel programming hardware, algorithms, mathematics and physics. Since 2007 he has been a researcher at the Cadence Research Laboratories in Berkeley.

Seminar organized by the ALGOS group (algos.inesc-id.pt)

Date: 03-Nov-2008    Time: 11:30:00    Location: 336

Faithful modeling of transient expression and its application to elucidating negative feedback regulation
Ron Pinter, Technion
Abstract—Modeling and analysis of genetic regulatory networks is essential both for better understanding their dynamic behavior and for elucidating and refining open issues. We present a discrete computational model that effectively describes the transient and sequential expression of a network of genes in a representative developmental pathway. Our model system is a transcriptional cascade that includes positive and negative feedback loops directing the initiation and progression through meiosis in budding yeast. The computational model allows qualitative analysis of the transcription of early meiosis-specific genes, specifically Ime2, and of their master activator, Ime1. The simulations demonstrate a robust transcriptional behavior with respect to the initial levels of Ime1 and Ime2. The computational results were verified experimentally by deleting various genes and by changing initial conditions. The model has a strong predictive aspect, and it provides insights into how to distinguish among and reason about alternative hypotheses concerning the mode by which negative regulation through Ime1 and Ime2 is accomplished. Some predictions were validated experimentally, for instance, showing that the decline in the transcription of IME1 depends on Rpd3, which is recruited by Ime1 to its promoter. Finally, this general model promotes the analysis of systems that are devoid of consistent quantitative data, as is often the case, and it can be easily adapted to other developmental pathways.
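The discrete, qualitative style of model described in the abstract above can be illustrated with a toy synchronous Boolean network. This is a hypothetical two-gene activation/negative-feedback loop for illustration only, not the model from the talk:

```python
# Hypothetical illustration (not the talk's model): a synchronous discrete
# update of a tiny gene network with activation and negative feedback.
# State maps gene name -> 0/1 expression level; all genes update together.

def step(state):
    """One synchronous update; every rule reads the previous state."""
    return {
        "IME1": 1 - state["IME2"],   # repressed by IME2 (negative feedback)
        "IME2": state["IME1"],       # activated by its master regulator IME1
    }

def simulate(state, steps):
    """Return the trajectory [state_0, state_1, ..., state_steps]."""
    trace = [dict(state)]
    for _ in range(steps):
        state = step(state)
        trace.append(dict(state))
    return trace

trace = simulate({"IME1": 1, "IME2": 0}, 4)
```

Even this toy loop shows the characteristic behavior of such models: the negative feedback makes expression transient rather than sustained, and the whole trajectory can be inspected qualitatively without any rate constants.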
Date: 30-Oct-2008    Time: 15:00:00    Location: 336

Local Properties of Biological Networks
Ron Pinter, Technion
Abstract—The study of biological networks has led to the development of a variety of measures for characterizing network properties at different levels. Global analysis provides summary measures such as diameter, clustering coefficients, and degree distribution that describe the network as a whole, whereas local properties, such as the occurrences of motifs and graphlets, allow us to focus on specific phenomena within the network. Local characteristics are suitable for studying networks that are incompletely explored; in particular, they faithfully capture the neighborhoods of those parts of the networks that are better studied. In this talk I will describe several methods to analyze both protein-protein interaction networks (which are undirected graphs) and regulation networks (which are directed), along with the biological consequences that they have yielded.

Date: 29-Oct-2008    Time: 11:00:00    Location: 336

Solving Techniques and Heuristics for Max-SAT and Partial Max-SAT
Josep Argelich, INESC-ID
Abstract—Max-SAT is the optimization version of the well-known Satisfiability Problem. In this talk we will introduce the Max-SAT problem and the solving techniques used by the most successful state-of-the-art Max-SAT solvers, such as the branch-and-bound scheme, techniques for improving the lower bound using underestimation and inference, and variable selection heuristics. Next, we will introduce Weighted and Partial Max-SAT and present new techniques for these formalisms, as well as adaptations of the techniques used in Max-SAT. Finally, we will present and discuss some results of the last Max-SAT evaluation.
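For readers unfamiliar with the problem the abstract above addresses, a minimal brute-force sketch of Partial Max-SAT may help fix the definitions (this enumerates assignments and is only an illustration; it is not one of the solver techniques from the talk):

```python
# Illustrative sketch: Partial Max-SAT by exhaustive enumeration.
# A clause is a list of literals: positive int = variable, negative = negation.
# Every hard clause must be satisfied; the number of satisfied soft clauses
# is maximized (in Weighted Max-SAT each soft clause would carry a weight).
from itertools import product

def satisfied(clause, assignment):
    """A clause holds if at least one of its literals is true."""
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def partial_max_sat(n_vars, hard, soft):
    """Return (best soft-clause count, best assignment) over 2^n_vars cases."""
    best_score, best_model = -1, None
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: bits[v] for v in range(n_vars)}
        if not all(satisfied(c, assignment) for c in hard):
            continue  # hard clauses are inviolable
        score = sum(satisfied(c, assignment) for c in soft)
        if score > best_score:
            best_score, best_model = score, assignment
    return best_score, best_model

hard = [[1, 2]]                  # x1 OR x2 must hold
soft = [[-1], [-2], [1, -2]]     # prefer x1 false, x2 false, (x1 or not x2)
score, model = partial_max_sat(2, hard, soft)
```

Branch-and-bound solvers explore the same assignment tree but prune it: the lower-bounding techniques mentioned in the abstract estimate how many soft clauses any extension of a partial assignment must already falsify.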
Date: 21-Oct-2008    Time: 14:30:00    Location: 04

Intelligent Signal Processing: The Confluence of Signal Processing and Pattern Recognition
Amitav Das, Microsoft Research
Abstract—Signal processing researchers are usually happy with their various ways of slicing and dicing signals to explore their different aspects, while pattern recognition researchers are busy investigating recognition/classification algorithms using whatever "features" of the signal are "given" to them. Usually these two groups of researchers each go their own way. However, for many applications it is important to consider feature selection and the classification method together, which is typically NOT done. For example, MFCC is used in speech recognition as a feature which is supposed to be "speaker-independent" and represent what you are saying; yet the same feature is also used by people working on speaker identification! In my talk, I will give a brief overview of popular and emerging signal processing applications and then pick one of my research areas, namely user identification, and show how judicious feature selection helps to keep the classification part simple and allows one to develop systems which provide high performance at very low complexity.

Date: 21-Oct-2008    Time: 10:30:00    Location: 336

New architectures for the final scaling of the CMOS world
Luigi Carro, Universidade Federal do Rio Grande do Sul
Abstract—As technology scaling reaches the physical limits of silicon, several new problems must be addressed, from the design of low-power but high-performance circuits to the reliability of weak transistors and mixed technologies (nanowires, SET, etc.). These technological problems will impact several layers of the current abstraction stack that covers computer and software production.
New architectural solutions that explore parallelism at different granularities must be sought, not only for performance/energy trade-offs, but also as a means to ensure reliability, fault tolerance and yield, thanks to regularity. This talk presents some ideas in this direction, covering future processor architectures and quaternary logic circuits, and discussing technologies that can deal with this multivariable problem.

Luigi Carro was born in Porto Alegre, Brazil, in 1962. He received the Electrical Engineering and MSc degrees from Universidade Federal do Rio Grande do Sul (UFRGS), Brazil, in 1985 and 1989, respectively. From 1989 to 1991 he worked at ST-Microelectronics, Agrate, Italy, in the R&D group. In 1996 he received his Dr. degree in Computer Science from Universidade Federal do Rio Grande do Sul (UFRGS), Brazil. He is presently a professor in the Applied Informatics Department at the Informatics Institute of UFRGS, in charge of the Computer Architecture and Organization courses at the undergraduate level. He is also a member of the Graduate Program in Computer Science at UFRGS, where he is co-responsible for courses on Embedded Systems, Digital Signal Processing, and VLSI Design. His primary research interests include embedded systems design, validation, automation and test, fault tolerance for future technologies, and rapid system prototyping. He has published more than 150 technical papers on those topics and is the author of the book Digital Systems Design and Prototyping (2001, in Portuguese) and co-author of Fault-Tolerance Techniques for SRAM-based FPGAs (Springer, 2006). For the latest news, please check http://www.inf.ufrgs.br/~carro.

Seminar organized by the ALGOS group (algos.inesc-id.pt)

Date: 16-Oct-2008    Time: 11:00:00    Location: 336

From Electrical Engineering to the Theory of Fuzzy Sets and Systems
Rudolf Seising, Ludwig-Maximilians-Universität München
Abstract—In 1965, Lotfi Zadeh, a professor of electrical engineering at the University of California in Berkeley, published the first papers on Fuzzy Set Theory. Since the 1980s, this mathematical theory of "unsharp amounts" has been applied with great success in many different fields. Thanks not least to extensive advertising campaigns for fuzzy-controlled household appliances and to their prominent presence in the media, first in Japan and then in other countries, the word "fuzzy" has also become very well known among non-scientists. On the other hand, the story of how Fuzzy Set Theory and its earliest applications originated has remained largely unknown. In this lecture, the history of Fuzzy Set Theory and the ways it was first used are incorporated into the history of 20th century science and technology. Influences from system theory and cybernetics stemming from the earliest part of the 20th century are considered alongside those of communication and control theory from mid-century.

Date: 15-Oct-2008    Time: 17:00:00    Location: Instituto Superior Técnico - FA2

Modeling and Verification of Integrated Circuits
Nick van der Meijs, Delft University of Technology
Abstract—Integrated circuits contain millions of electronic switches connected by kilometers of interconnect, on an area of about 1 square cm. The electrical behavior of such circuits strongly depends on the capacitive, resistive and even inductive properties of the interconnect network and the substrate. Since these parasitic properties can only be approximately accounted for during design, it is necessary for verification purposes to translate the layout (physical design) of an integrated circuit back into an electrical netlist.
This process is called parasitics extraction. This presentation will first explain the background and challenges of parasitics extraction, followed by a review of some recent results on the modeling of interconnect and substrate. Specific topics include model order reduction and manufacturing variability. The presentation will conclude with a brief overview of open problems.
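As a pointer to what working with extracted parasitics involves (an illustration, not content from the talk): once an RC netlist has been extracted, simple first-order metrics such as the classic Elmore delay give quick timing estimates. For an RC ladder, the Elmore delay to the last node is the sum, over each resistor, of its resistance times the total capacitance downstream of it:

```python
# Illustrative sketch: Elmore delay of an RC ladder (a first-order timing
# estimate commonly applied to extracted interconnect parasitics).
# resistances[i] is the resistor feeding node i; capacitances[i] is the
# capacitance at node i. Delay to the end = sum_i R_i * C_downstream(i).

def elmore_delay(resistances, capacitances):
    """Elmore delay (in consistent R*C units) to the last ladder node."""
    delay = 0.0
    for i, r in enumerate(resistances):
        downstream_c = sum(capacitances[i:])  # capacitance past resistor i
        delay += r * downstream_c
    return delay

# Uniform 3-segment line, 1 unit of R and 1 unit of C per segment:
d = elmore_delay([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```

The quadratic growth of this sum with wire length is one reason extracted netlists get huge, and why model order reduction (mentioned in the abstract) is used to compress them before simulation.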

Nick van der Meijs (NL, 1959) received his PhD from Delft University of Technology in 1992, where he is currently an associate professor. His teaching responsibilities include circuit theory, VLSI design, and electronic design automation. He is also Director of Studies for the EE program. His research interests center around physical/electrical aspects of deep-submicron integrated circuits, including ultra-deep-submicron design, modeling and extraction of physical/electrical effects in large integrated circuits, and efficient (practical) algorithms for electronic design automation in general. He leads a research group on physical modelling and verification of parasitic effects in integrated circuits and is a principal developer of the SPACE layout-to-circuit extractor. He has served on various program committees of international conferences, is the chair of the IEEE Benelux Circuits and Systems chapter and a previous chair of the ProRISC micro-electronics workshop in the Netherlands. He is a recipient of a personal ~0.9M Euro "pioneer" research grant in the Netherlands. (http://ens.ewi.tudelft.nl/~nick/)

Date and location: Tuesday, September 30, 2008, 11h00, room 336 at INESC-ID, Lisbon.

About the speaker: Professor Faramarz Samavati currently works on various aspects of Computer Graphics. In general terms, his research areas are Geometric Modeling, Sketch-Based Modeling, Visualization and Non-Photorealistic Rendering. More specifically, the research topics in his area are Surface Modeling, Volume Modeling, Subdivision Surfaces, Flexible Projection, Least Squares, NURBS, Multiresolution and Wavelets. Faramarz Samavati currently supervises a group of very good graduate students. He also collaborates with several other researchers (Richard Bartels, Przemyslaw Prusinkiewicz, Mario Costa Sousa, Brian Wyvill, Marina Gavrilova, Sheelagh Carpendale and Joaquim Jorge). He has over 50 technical papers in peer-reviewed journals and conferences. He is a member of ACM and EG. He currently serves on the program committees of IMMERSCOM 2007, ICIAR 2007, SBIM 2007 and SMI 2008.

Date: 15-May-2008    Time: 09:30:00    Location: Sala de V/C IST da Alameda e TagusPark

Fully Compressed Suffix Trees
Luís M. S. Russo, Faculdade de Ciências e Tecnologia da UNL
Abstract—Suffix trees are by far the most important data structure in stringology, with myriad applications in fields like bioinformatics and information retrieval. Classical representations of suffix trees require O(n log n) bits of space for a string of size n. This is considerably more than the n log_2 sigma bits needed for the string itself, where sigma is the alphabet size. The size of suffix trees has been a barrier to their wider adoption in practice. Recent compressed suffix tree representations require just the space of the compressed string plus Theta(n) extra bits. This is already spectacular, but still unsatisfactory when sigma is small, as in DNA sequences. In this talk we introduce the first compressed suffix tree representation that breaks this linear-space barrier. Our representation requires sublinear extra space and supports a large set of navigational operations in logarithmic time.
An essential ingredient of our representation is the lowest common ancestor (LCA) query. We reveal important connections between LCA queries and suffix tree navigation.

Date: 08-May-2008    Time: 17:00:00    Location: 04

Low Power Microarchitecture with Instruction Reuse
Frederico Pratas, INESC-ID
Abstract—Power consumption has become a very important metric and a challenging research topic in the design of microprocessors in recent years. This work improves the power efficiency of superscalar processors through instruction reuse at the execution stage. A new method for reusing instructions that compose small loops is proposed: instructions are first buffered in the Reorder Buffer and reused afterwards without the need for dynamically unrolling the loop, as commonly implemented by traditional instruction reuse techniques. In order to evaluate the proposed method we modified the sim-outorder tool from the SimpleScalar and Wattch power/performance simulators. Several different configurations and benchmarks were used during the simulations. The obtained results show that by implementing this new method in a superscalar microarchitecture, the power efficiency can be improved without significantly affecting either the performance or the cost.

Date: 29-Apr-2008    Time: 11:00:00    Location: 336

Visual style representations for illustrative visualization
Mário Costa Sousa, University of Calgary
Abstract—I will focus on visual style representations for illustrative visualization. As different rendering styles are an effective means for accentuating features and directing the viewer's attention, an interactive illustrative visualization system needs to provide an easy-to-use yet powerful interface for changing these styles. The lecture will review existing approaches to stylized rendering and discuss practical considerations in the choice of an appropriate representation for visual styles.

I will also review the state of the art in sketch-based interfaces and modeling (SBIM) for scientific visualization, including different aspects and inspiration factors brought from traditional medical/scientific illustration principles, methods and practices. I will describe unique techniques and problems, including the presentation of systems, algorithms and implementation techniques, focusing on interactive SBIM for illustrative botanical modeling and volume graphics.

Mário Costa Sousa is an Associate Professor of Computer Science at the University of Calgary and a member of the Computer Graphics Lab at the University of Calgary. He holds an M.Sc. (PUC-Rio, Brazil) and a Ph.D. (University of Alberta), both in Computer Science. His current focus is on research and development of techniques that capture the enhancement and expressive capability of traditional illustrations. This work involves research on illustrative scientific visualization, non-photorealistic rendering, sketch-based interfaces and modeling, visual perception, volume rendering, interactive simulations and real-time rendering. Dr. Sousa has been very active in the graphics community, teaching courses, presenting papers and serving on many conference program committees. Sousa has active collaborations with illustrative visualization research groups, medical centers, and scientific institutes, and with illustrators/studios affiliated with the Association of Medical Illustrators and the Guild of Natural Science Illustrators.

Date: 29-Apr-2008    Time: 09:00:00    Location: Sala de V/C IST da Alameda e TagusPark

Improving mesh-style clock distribution architectures
Gustavo Wilke, Universidade Federal do Rio Grande do Sul
Abstract—Clock network design is a crucial task in the design of high-performance integrated circuits. Besides meeting increasingly demanding performance requirements, designers also need to keep the power consumption of the clock network under control. This presentation will discuss how mesh-style clock distributions can be improved to reduce their power consumption and increase clock synchronism (i.e., reduce clock skew). A new clock mesh buffer design is proposed with the aim of reducing the short-circuit power consumption between the different buffers connected to the mesh.
To further improve the synchronism and power consumption of clock meshes, a statistical clock buffer sizing algorithm is also proposed. Experimental results show that the new clock buffer design is able to reduce both power consumption and clock skew by more than 50%.

Cristiano Lazzari works as a researcher in the ALGOS group at INESC-ID. He obtained his Ph.D. in December 2007 from the Federal University of Rio Grande do Sul (UFRGS), Brazil, and from the Institut National Polytechnique de Grenoble (INPG), France. The main research area of his Ph.D. was algorithms and techniques for automatic layout generation and radiation-hardened circuit generation. In 2007, Lazzari worked at CEITEC, Brazil, as a backend engineer, where he was responsible for the logic and physical synthesis of digital circuits.

Date: 17-Mar-2008    Time: 11:00:00    Location: 336

Knowledge discovery in environmental microbiology and physiology: problems, tools and protocols
Andreas Bohn, Instituto de Tecnologia Química e Biológica
Abstract—The present talk deals with dynamical processes observed at the organismal level in conditions close to real-world environments. The relatively small amount of data and replicates available in such experiments poses specific challenges to the design, deployment and application of integrated computational tools for data management and analysis. These are exemplified by microcosm studies of phototrophic biofilms and in-vivo circadian rhythms of body temperature in mammals. On the basis of these experiences, I will discuss potential alterations to common protocols of interdisciplinary collaboration, which might be useful in enhancing the efficiency of computational tools in knowledge discovery.

Date: 13-Mar-2008    Time: 16:00:00    Location: 336

Unsatisfiability-Based Algorithms for Maximum Satisfiability
João Paulo Marques-Silva, University of Southampton
Abstract—The problem of Maximum Satisfiability (MaxSAT) and some of its variants find an increasing number of practical applications in a wide range of areas. Examples include optimization problems in digital system design and verification, and in bioinformatics. However, most practical instances of MaxSAT are too hard for existing branch and bound algorithms.
One recent alternative to branch and bound MaxSAT algorithms is based on unsatisfiable subformula identification. This talk provides an overview of recent algorithms for MaxSAT based on unsatisfiable subformula identification.

Date: 07-Mar-2008    Time: 11:00:00    Location: 336

Development of Microelectronics in Brazil
João Baptista Martins, Universidade Federal de Santa Maria
Abstract—The talk aims to present the current situation of microelectronics in Brazil, together with the advances and prospects for the medium and long term: the governmental support and incentives for the area, the R&D groups, and the creation of Design Houses, in particular CEITEC (Centro de Excelência em Tecnologia Electrónica Avançada).

Date: 03-Mar-2008    Time: 14:30:00    Location: IST (Alameda) Sala VA-1

Portuguese phonological system used in Beira Interior
Sara Candeias, Instituto de Telecomunicações, Department of Electrical and Computer Engineering, University of Coim
Abstract—This study proposes a model for a phonological description of the speech patterns attested in the Portuguese language variety spoken in Fundão - Beira Interior. The research is based on analytic work within the functionalist theory, the perception of phonetic features, and data from statistical analyses. A phoneme database comprising 142,020 examples was built for this purpose, the realizations of which are described and analysed according to syllabic context. The phonemic database was constructed in order to establish the pertinent feature set in the referred variety. This set regulates the dynamic nature of linguistic subsystems, taking into account both the variety of realizations and the optimization of uses. The description of these uses is based on statistical analyses, which are presented in relative and absolute values. It is suggested that these phonological phenomena maps may have their correlation in the Verb and Personal Pronoun syntactic-semantic categories.
Date: 29-Feb-2008    Time: 15:00:00    Location: 336

What are functional modules in biological networks
José Pereira-Leal, Instituto Gulbenkian de Ciência
Abstract—Modularity has become in recent years a widely accepted feature of biological networks. However, it seems to mean different things in different networks, and even within the same type of network. This poses a challenge to the development of methods to partition networks into functionally meaningful entities. In my talk I will discuss modularity in the context of protein interaction networks, from method development to evolutionary studies.

Date: 28-Feb-2008    Time: 16:00:00    Location: 336

Embedded Cryptography: flexibility and security through reconfigurability
Daniel Mesquita, INESC-ID
Abstract—This work addresses the security aspects of embedded systems in different scenarios. Mobile communications, secure identification cards, credit cards and in-vehicle communications are application fields where data security is a major issue. In this context, the main tool to provide data integrity, confidentiality, authentication and non-repudiation remains cryptography. Nevertheless, embedded cryptographic systems may leak side-channel information, enabling some attacks. This presentation aims to discuss side-channel attacks and countermeasures, introducing our work in progress concerning reconfigurable approaches to embedded cryptography.

Date: 21-Feb-2008    Time: 15:30:00    Location: IST, Taguspark, Anfiteatro A3

Test Generation Based on Finite State Machines
Prof. Adenilso da Silva Simão, Universidade de São Paulo
Abstract—Finite State Machines (FSMs) are formal models that have been used to describe a wide variety of systems, from protocols to hardware components, including software classes and interaction models.
In the context of software and hardware testing, FSMs have been widely used for test case generation, since they allow a precise quantification of the faults that will be uncovered. This seminar will present the main concepts of FSM-based testing, starting with the theoretical foundations and illustrating the classical results. The current state of the area will then be shown, together with recent results, and open issues will also be presented. Finally, the points of intersection between FSM-based testing and the CNPq/Grices project will be discussed. [BIO: Adenilso da Silva Simão is a professor at the Universidade de São Paulo (USP), Brazil. He has been a member of the Department of Computer Systems of the ICMC (Instituto de Ciências Matemáticas e da Computação) at USP in São Carlos since 2004. His main interests are in the areas of software testing and formal models.]

Date: 21-Feb-2008    Time: 14:30:00    Location: IST, Taguspark, Anfiteatro A3

From Topological Navigation to Power Line Inspection
Alberto Vale, ALBATROZ Engenharia S.A.
Abstract—This presentation compiles several projects in the field of mobile robotics. Navigation began with the teleoperation of mobile robots over the Internet, at the end of the last century. That navigation amounted to the commands sent by the user, whose perception was based on video images and other sensor data. The next challenge was to make the platform autonomous, for example having a mobile robot navigate mazes. This navigation is divided into two levels: the first, avoiding obstacles and following trajectories; the second, a higher-level navigation, discovering and solving the maze. This higher-level navigation was the first step towards a new approach in mobile robotics: topological navigation.
The main motivation centers on human beings, who do not make their daily journeys with odometers, pedometers, compasses or GPS receivers, but instead navigate efficiently by salient landmarks in the surrounding scene. Topological navigation required in-depth research into the processing of the most varied sensor data, with special emphasis on laser scan data, which eventually served a very real problem: power line inspection, which also brings new challenges for aerial robotics. [BIO: Alberto Vale received his degree in Electrical and Computer Engineering, in the Control and Robotics branch, from IST in 1999, and his PhD in Mobile Robotics from the same university in 2005, supervised by Professor Maria Isabel Ribeiro. He was a researcher at the ISR laboratory (Instituto de Sistemas e Robótica) between 1999 and 2005, and a lecturer in the Algebra and Analysis department at IST from 2000 to 2002. He is a certified trainer in the areas of Electrotechnics, Informatics and Educational Technologies, and co-founder and head of the R&D team of Albatroz Engenharia S.A. since 2006. He has several scientific publications and presentations at national and international level, practices various sports such as diving, swimming and canoeing, and enjoys music, photography, drawing and web design.]

Date: 21-Feb-2008    Time: 10:30:00    Location: IST, Taguspark, Anfiteatro A3

Applications of Reconfigurable Computing in the Design of Mobile Robots
Prof. Eduardo Marques, Universidade de São Paulo
Abstract—This seminar will present applications of reconfigurable computing in mobile robotics. Architectures based on the SoC (System-on-a-Chip) concept are used to accelerate the application. This SoC is implemented on latest-generation FPGA (Field Programmable Gate Array) reprogrammable circuits from Altera.
The target architecture consists of an Altera NIOS II softcore processor coupled with several reconfigurable processing units (RPUs) developed specifically for mobile robotics. This methodology lets researchers and designers in mobile robotics test their algorithms on high-performance systems and thereby explore new solutions for real-time use, an ever more common requirement in embedded mobile robotics. Validation tests of the generated system are carried out with a Pioneer 3DX robot. Current work focuses on a hardware/software co-design environment to ease the development of mobile robotics applications on FPGAs. The project started in April 2005 under the CNPq/Grices agreement, involving the University of São Paulo and the University of the Algarve (with INESC-ID/IST currently as the Portuguese partner). [BIO: Eduardo Marques is an Associate Professor at the University of São Paulo (USP), Brazil. He has been a member of the ICMC (Instituto de Ciências Matemáticas e da Computação) at USP in São Carlos since 1986. His main interests are reconfigurable computing applied to mobile robotics.] Date: 21-Feb-2008    Time: 09:30:00    Location: IST/Taguspark, Anfiteatro A3 Identification of Transcription Factor Binding Sites in Promoter Regions by Modularity Analysis of the Motif Co-Occurrence Graph A. P. Francisco Inesc-ID Abstract—Many algorithms have been proposed to date for the problem of finding biologically significant motifs in promoter regions. They can be classified into two large families: combinatorial methods and probabilistic methods. Probabilistic methods have been used more extensively, since they require less input from the user and their output is easier to interpret. Combinatorial methods have the potential to identify hard-to-detect motifs, but their output is much harder to interpret, since it may consist of hundreds or thousands of motifs.
In this work, we propose a method that processes the output of combinatorial motif finders in order to find groups of motifs that represent variations of the same motif, thus reducing the output to a manageable size. This processing is done by building a graph that represents the co-occurrences of motifs and finding communities in this graph. We show that this innovative approach leads to a method that is as easy to use as a probabilistic motif finder and as sensitive to low-quorum motifs as a combinatorial motif finder. The method was integrated with two combinatorial motif finders and made available on the Web, in an application that can be used to analyze promoter regions in S. cerevisiae. Experiments performed using this system show that the method is effective in the identification of relevant binding sites. Date: 31-Jan-2008    Time: 16:30:00    Location: 336 Aggressive Loop Pipelining for Reconfigurable Architectures Ricardo Menotti Universidade Tecnológica Federal do Paraná Abstract—This seminar addresses the application of new loop pipelining techniques to reconfigurable architectures. First, the characteristics of reconfigurable computing are described and compared with traditional computing approaches. Next, the main loop pipelining techniques in use and the hardware generation tools found in the literature are described. Finally, a method for aggressive loop pipelining is presented, together with the impact of this technique when applied to reconfigurable architectures. Date: 10-Jan-2008    Time: 17:00:00    Location: IST, Taguspark, A4 Using sequence and expression data to predict microRNA targets in animals Hélio Pais University of East Anglia Abstract—MicroRNAs are short (20-22 nucleotide) non-coding RNAs involved in post-transcriptional regulation of gene expression.
One of the essential requirements to understand the function of a microRNA is to know the genes it regulates, the so-called targets of the microRNA. It is believed that in plants most target sites present a near-perfect complementarity to the sequence of the microRNA. In animals, however, most target sites present a lower number of complementary nucleotides. It is therefore difficult to design target prediction methods that simultaneously have high specificity and sensitivity. It has recently been discovered that in animals microRNAs not only suppress translation but also induce the destabilization of mRNA transcripts. This discovery has opened up the possibility of using data from microarray assays, performed on cells where a microRNA's expression is modified, to predict its targets. In the first part of the talk we will review current sequence-based microRNA target prediction tools. In the second part we will show how microarray data can be used to improve microRNA target prediction. Date: 20-Dec-2007    Time: 17:00:00    Location: 336 Location Proteomics Using Machine Learning Techniques Luis Coelho Carnegie Mellon University Abstract—Fluorescent microscopy is a method by which a labeled protein can be imaged inside a cell. Such images can be used to determine the subcellular location of the protein. I will show how machine learning techniques have been used to automate this task. I will present the basic methods used as well as some more recent work. Date: 18-Dec-2007    Time: 17:00:00    Location: 04 Neuroprosthetic devices: the interplay between electronic and biological systems Eduardo Fernandez Universidade de Alicante Abstract—The interplay between electronic and biological systems is an area of intense interest. The development of neuroprosthetic devices can have a high potential impact on brain research and brain-based industry, and is one of the central problems to be addressed in the next decades.
This talk will review and summarize the most important physiological principles behind any neuroprosthetic approach and present a survey of the present state of developments concerning the feasibility of visual neuroprostheses, as a means through which a limited but useful visual sense could be restored to profoundly blind people. Date: 14-Dec-2007    Time: 10:30:00    Location: 336 Broadband radio access network challenges Prof. Hamid Aghvami Kings College Abstract—The talk will first describe three emerging technologies for next-generation broadband radio access networks. It will then discuss the challenges in supporting end-to-end networking. It will also address how to ensure the establishment, maintenance and termination of network edge-to-end QoS and security in broadband radio access networks. As an example, the design of a wireless access network in the context of end-to-end networking will be given. Finally, it will discuss possible applications and services for future broadband radio access networks. Date: 10-Dec-2007    Time: 14:30:00    Location: Anfiteatro A2 Speculations on a New Approach to Modeling Biological Systems Eberhard Voit Georgia Institute of Technology Abstract—Computational systems biology complements experimental biology in unique ways that promise to reveal insights and a depth of understanding not achievable without systems approaches. A major challenge of systems biology continues to be the determination of parameter values for mathematical models. While some models can be analyzed in symbolic form, these are few and far between, and the lack of parameter values is a true obstacle for most computational analyses of realistic biological phenomena. As a consequence, computational modelers tend to take on a problem only if there is a relatively solid database for parameter estimation.
Interestingly, biologists very often have very detailed mental models of the phenomenon they are investigating and are not really interested in absolutely precise numerical results, as long as they can test relevant, semi-quantitative hypotheses. However, neither they nor their modeling colleagues have the means of translating mental models into numerical mathematical structures that would allow advanced diagnosis and testing. In this presentation I will speculate on a possible way to bridge the cleft between mental and numerical models, using modern methods from Biochemical Systems Theory. The envisioned technique is tentatively called "concept map modeling" and seems quite reasonable, but I do not yet have proof that it will actually work in real-world applications. Date: 29-Nov-2007    Time: 17:00:00    Location: 336 Media Asset Management - The SIC Audiovisual Archive Ana Fanqueira, José Lopes SIC Abstract—Digital technologies, notably in television, are closely tied to the changing communication processes of the companies in this sector, of which SIC is one. A television archive is no longer used primarily by the television channel itself, but also by the other channels that distribute information to the public, whether online on the Internet or over mobile phones. It is in this context that the digitisation of the SIC Archive takes place. When a news story breaks, it must be distributed through the channel most appropriate to each person, the goal of the Impresa group being to reach everyone. It is for this philosophy that the Archive has to be prepared, and for which a Digital Content Management and Archiving system is intended. Text, graphics and images, in different digital formats, give users full flexibility in using the archived content in any production and/or distribution environment.
Date: 26-Nov-2007    Time: 11:00:00    Location: Anfiteatro FA1 Mining Queries Ricardo Baeza-Yates Yahoo Research, Barcelona Abstract—User queries in search engines and Websites give valuable information on the interests of people. In addition, clicks after queries relate those interests to actual content. Even queries without clicks or answers imply important missing synonyms or content. In this talk we show several examples of how to use this information to improve the performance of search engines, to recommend better queries, to improve the information scent of the content of a Website, and ultimately to capture knowledge, as Web queries are the largest wisdom of crowds on the Internet. Date: 06-Nov-2007    Time: 16:00:00    Location: 336 Using and dealing with immense quantities of data Davi Reis Google Brasil Abstract—On Monday, 5 November, at 11:30, in room 0.26 of the Taguspark campus, Davi Reis, a Google engineer at the Google Brazil research and development laboratories, will give a talk on some of the technologies currently used by Google, entitled 'Using and dealing with immense quantities of data'. The talk will take place after the previously announced presentation by Prof. Alberto Laender. Date: 05-Nov-2007    Time: 11:30:00    Location: 0.26 Taguspark A Study of the Profile of Scientific Output in Computer Science Alberto H. F. Laender Universidade Federal de Minas Gerais Abstract—This talk gives a brief description of the Perfil-CC project, under way at the Department of Computer Science of UFMG, whose goal is to study the profile of scientific output in the field of Computer Science.
The study surveys the scientific output of 22 of the most important graduate programmes in Computer Science in North America and Europe, and of the 8 most important programmes in Brazil, using as data sources DBLP - the Digital Bibliography & Library Project, hosted at the University of Trier, Germany - and Qualis, the journal and conference classification system of CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior of the Brazilian Ministry of Education. Date: 05-Nov-2007    Time: 10:00:00    Location: 0.26, Taguspark Tackling the Acoustic Front-end for Distant-Talking Automatic Speech Recognition Walter Kellermann University of Erlangen-Nuremberg Abstract—With the ever-growing interest in 'natural' hands-free acoustic human/machine interfaces, the need for corresponding distant-talking automatic speech recognition (ASR) systems increases. Considering interactive TV as a challenging exemplary application scenario, we investigate the structural problems presented by noisy and reverberant multi-source environments with unpredictable interference and acoustic echoes of loudspeaker signals, and discuss current acoustic signal processing techniques to enhance the input to the actual ASR system. Special attention is paid to reverberation, which affects speech recognizers much more than human listeners, and a recently published method incorporating a reverberation model at the feature level of ASR is discussed. Date: 02-Oct-2007    Time: 15:30:00    Location: 336 Acoustic Signal Processing for Next-Generation Multichannel Human/Machine Interfaces Walter Kellermann University of Erlangen-Nuremberg Abstract—The acoustic interface for future multimedia and communication terminals should be hands-free and as natural as possible, which implies that the user should be free to move and should not need to wear any devices.
For digital signal processing this poses major challenges for both signal acquisition and reproduction, which reach far beyond the current state of the technology. For ideal acquisition of an acoustic source signal in noisy and reverberant environments, we need to compensate acoustic echoes, suppress noise and interference, and dereverberate the desired source signal. On the other hand, for a perfect reproduction of real or virtual acoustic scenes we need to create the desired sound signals at the listener's ears, while at the same time removing undesired reverberance and suppressing local noise. In this talk we briefly analyze the fundamental problems for signal processing in the framework of MIMO (multiple input - multiple output) systems and discuss current solutions. In accordance with ongoing research, we emphasize nonlinear and multichannel acoustic echo cancellation, as well as microphone array signal processing for beamforming, interference suppression, blind source separation, and source localization. Date: 01-Oct-2007    Time: 14:00:00    Location: Room C11, IST Compressing Web Graphs as Texts Gonzalo Navarro Universidade do Chile Abstract—The need to run different kinds of algorithms over large Web graphs motivates the research for compressed graph representations that can be accessed without decompression. At this point there exist a few such compression proposals, some of them very effective in practice. In this talk we introduce a novel approach to graph compression, based on regarding the graph as a text and using existing techniques for text compression/indexing. This permits accessing the graph efficiently without decompressing it, and in addition brings new functionalities to the compressed graph. Our experimental results show that our technique has the potential of being competitive with the best alternative techniques, yet it is not fully satisfactory.
Then we introduce a second approach, where we go back to pure compression. By far the best current result is the technique by Boldi and Vigna, which takes advantage of several particular properties of Web graphs. We show that the same properties can be exploited with a different and elegant technique, built on Re-Pair compression, which achieves about the same space but much faster navigation of the graph. Moreover, the technique has the potential of adapting well to secondary memory. Finally, we comment on ongoing work to combine the two approaches. The successful scheme can be enriched with succinct data structures so as to permit further graph traversal operations. Date: 28-Sep-2007    Time: 16:30:00    Location: 336 Low Power Microarchitecture With Instruction Reuse Frederico Pratas Inesc-ID Abstract—Power consumption has become a very important metric and research topic in microprocessor design in recent years. In this talk we propose a new method that reuses the instructions forming small loops: the loop's instructions are first buffered in the Reorder Buffer and reused afterwards. The proposed method is implemented by introducing two new structures into a typical superscalar microarchitecture. To evaluate the proposed method, it was implemented and its operation simulated with the SimpleScalar tools. Several different configurations and benchmarks have been used, and the conclusion is that implementing the proposed method in a superscalar microarchitecture improves power efficiency without significantly affecting performance.
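The instruction-reuse idea above can be illustrated with a toy front-end model: once a small loop's instructions are captured in a buffer, later iterations are served from it instead of being fetched and decoded again. This is only a sketch of the general principle (the talk's actual design buffers decoded instructions in the Reorder Buffer), and all names here are illustrative:

```python
# Toy model of loop-instruction reuse (illustrative, not the talk's design).
class LoopBuffer:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.buffer = {}          # pc -> decoded instruction
        self.fetch_decodes = 0    # front-end work actually performed
        self.reuses = 0           # instructions served from the buffer

    def step(self, pc, program):
        """Return the decoded instruction at pc, reusing it if buffered."""
        if pc in self.buffer:
            self.reuses += 1
            return self.buffer[pc]
        self.fetch_decodes += 1
        decoded = ("decoded", program[pc])
        if len(self.buffer) < self.capacity:
            self.buffer[pc] = decoded   # capture small loop bodies
        return decoded

# A 3-instruction loop executed 100 times: only the first iteration
# pays the fetch/decode cost; the other 99 reuse buffered instructions.
program = {0: "add", 1: "cmp", 2: "branch 0"}
lb = LoopBuffer()
for _ in range(100):
    for pc in (0, 1, 2):
        lb.step(pc, program)
print(lb.fetch_decodes, lb.reuses)  # 3 297
```

The ratio of reuses to fetch/decodes is the front-end work avoided, which is where the power saving of such schemes comes from.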
Date: 17-Sep-2007    Time: 16:00:00    Location: 336 SURVEY ON VISUALIZATION OF IMPLICIT SURFACES Bruno Rodrigues de Araújo Instituto Superior Técnico Abstract—Implicit surfaces are a popular mathematical model used in Computer Graphics to represent shapes for Modeling, Animation, Scientific Simulation and Visualization. Implicit surfaces provide a smooth and compact model, requiring few high-level primitives to describe free-form surfaces, which makes them a suitable alternative for representing 3D data gathered by 3D scanning for reverse engineering, or medical data from MRI or CT scans for scientific visualization. However, they are hard to display: to take advantage of the current graphics pipeline, which relies on triangle rasterization, they must be converted from their continuous mathematical definition to a piecewise polygonal representation. In this work we survey the techniques for the visualization of implicit surfaces. Starting from the identification of the different types of implicit surfaces used in Computer Graphics, we identify the main classes of algorithms for their visualization and the trade-offs between them. We then focus on polygonization methods, since they are the most popular and the best adapted to today's graphics hardware. Since polygonization is a discretization process of implicit surfaces, we present the state of the art on the main issues related to generating a mesh that approximates a continuous model. These issues concern topological correctness, fidelity to sharp and smooth features, and the visualization or conversion quality of the resulting polygonal approximation. This enables us to classify and compare existing visualization approaches using comparison criteria extracted from the concerns handled by current research in this area.
The analysis of the existing techniques enables us to identify the best strategies for offering a high-quality visualization of implicit surfaces, and the most adequate solutions to overcome the open issues related to the polygonization of implicit surfaces. Date: 13-Sep-2007    Time: 16:00:00    Location: Auditorio Omega, 9º Andar INESC Symmetry Breaking Ordering Constraints Zeynep Kiziltan Università di Bologna Abstract—Many problems in business, industry, and academia can be modelled as constraint programs consisting of matrices of decision variables. Such 'matrix models' often have symmetry. In particular, they often have row and column symmetry, as the rows and columns can freely be permuted without affecting the satisfiability of assignments. Row and column symmetries can be very problematic in a systematic search, as they grow super-exponentially and create a significant amount of redundancy in the search space. This talk is an overview of my PhD dissertation, and it describes some of the first work on dealing with row and column symmetries efficiently and effectively. Row and column symmetry has been recognised by many other researchers as being critical in a wide range of application domains, and it is now one of the most active areas of research in symmetry in CSPs. The ordering constraints and the propagators proposed in this dissertation are central to some of the mechanisms proposed for dealing with row and column symmetries. Zeynep Kiziltan is an assistant professor at the Department of Computer Science of the University of Bologna, Italy. She received her PhD degree in 2004 from the University of Uppsala in Sweden, where she was later appointed associate professor. Dr. Kiziltan's PhD thesis won the 2004 best thesis award of the European Coordinating Committee for Artificial Intelligence.
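A common concrete instance of such ordering constraints is the well-known "double-lex" scheme: require both the rows and the columns of the matrix to be lexicographically ordered, so that from each class of row/column permutations only canonical assignments survive search. The sketch below checks that condition by enumeration; it illustrates the idea only, not the dissertation's propagators:

```python
from itertools import product

def double_lex(matrix):
    """True iff the rows and the columns are both lexicographically
    non-decreasing: the 'double-lex' ordering used to break row and
    column symmetry in matrix models."""
    rows = [tuple(r) for r in matrix]
    cols = [tuple(c) for c in zip(*matrix)]
    return rows == sorted(rows) and cols == sorted(cols)

# Of the 16 raw 2x2 0/1 assignments, only 7 satisfy double-lex; each
# survivor is a canonical representative of a row/column-permutation
# class, so a systematic search visits far fewer symmetric states.
survivors = [m for m in product([0, 1], repeat=4)
             if double_lex([list(m[:2]), list(m[2:])])]
print(len(survivors))  # 7
```

In a real constraint solver these orderings are posted as constraints with dedicated propagators rather than checked after the fact; the enumeration above only shows the pruning effect.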
Date: 07-Sep-2007    Time: 11:00:00    Location: 336 Transactional Boosting: A Methodology for Highly-Concurrent Transactional Objects Maurice Herlihy Brown University Abstract—We describe a methodology for transforming a large class of highly-concurrent linearizable objects into highly-concurrent transactional objects. As long as the linearizable implementation satisfies certain regularity properties (informally, that every method has an inverse), we define a simple wrapper for the linearizable implementation that guarantees that concurrent transactions without inherent conflicts can synchronize at the same granularity as the original linearizable implementation. Joint work with Eric Koskinen. Maurice Herlihy is Professor of Computer Science at Brown University. His research centers on practical and theoretical aspects of multiprocessor synchronization, with a focus on wait-free and lock-free synchronization. His 1991 paper "Wait-Free Synchronization" won the 2003 Dijkstra Prize in Distributed Computing, and he shared the 2004 Gödel Prize for his 1999 paper "The Topological Structure of Asynchronous Computation". Date: 05-Sep-2007    Time: 17:00:00    Location: Alfa 9th Floor Survey on Data Network Congestion Prediction using Data Mining Techniques Luís Ribeiro Siemens Portugal (Alfragide) Abstract—Telecommunication providers often face a very complex problem: how to maximize their Return On Investment (ROI) and keep customers happy by constantly providing them with good QoS levels. To keep costs low, operators often accept more customers than their network resources could theoretically accommodate. Normally this is not a problem, because most customers have a very sparse usage pattern. Even so, in some circumstances congestion will occur. There are already some well-known congestion avoidance techniques that minimize (up to a certain level) congestion in the network. The most widely used protocol is TCP.
In the network core there are active queue management protocols (the Random Early Detection - RED - family [2, 3]) or the simple Drop from Tail (DT) policy that work on the queues of the Network Elements (NE). These protocols perform well, but they still have a small problem: there must be at least some packet loss to trigger protocol actions (TCP) or, instead, the protocol action is to cause packet loss (RED or DT), which means QoS degradation. Several approaches have been taken to overcome this problem. One of them is to predict congestion instead of reacting to it. This study provides a snapshot of the current research on congestion prediction in data networks using Data Mining techniques. Date: 31-Jul-2007    Time: 14:00:00    Location: 336 COMPUTATIONAL DISCOURSE ANALYSIS Instituto Superior Técnico Abstract—Discourse information is significant for several natural language processing tasks. From interpretation to generation, benefits arise when processing goes beyond sentence boundaries. In this survey, we address the main computational theories of discourse, explore how the most pertinent issues are solved and which were the most relevant contributions, examine representative discourse processing systems, and review the main evaluation methodologies. Date: 27-Jul-2007    Time: 14:00:00    Location: INESC - Sala 336 STATE OF THE ART OF E-COMMERCE PLATFORMS FOR CONTENT MANAGEMENT SYSTEMS Instituto Superior Técnico Abstract—Content Management Systems (CMS) are software systems used to publish content in web applications (for example corporate sites and portals), answering the need to ease the organisation, control and publication of a large volume of documents and other content. Electronic commerce on the Internet consists of the distribution, purchase, sale, marketing and supply of products or services over the Internet, and has rapidly emerged as one of the main business requirements of organisations.
In this context, and considering the different technological approaches to supporting electronic commerce on the Internet, this research work focuses on support through e-commerce management systems and e-commerce platforms that extend content management systems. The state of the art of CMS with e-commerce support is analysed on the basis of a reference model, which is then used to analyse and compare the following systems: Commerce Starter Kit, osCommerce, VirtueMart and CATALOOK.netStore. Date: 18-Jul-2007    Time: 17:00:00    Location: Sala 336 do INESC-ID ARGUMENTATION Instituto Superior Técnico Abstract—The problems related to understanding Argumentation, its role in human reasoning, its formalisation and its applications have been studied in several fields, notably Philosophy, Logic and, above all, AI. The general idea of Argumentation is that an argument is accepted if it successfully withstands its counter-arguments. The beliefs of a rational agent are characterised by the relations between the arguments on which those beliefs rest and the external arguments that contradict them. Argumentation is thus, in a sense, grounded in a stability with the exterior that makes the proposed arguments accepted. Beyond studying the concept of Argumentation in general, this work presented the fundamental concepts of argumentation based on classical (propositional) logic and developed argumentation based on non-monotonic logic, using Reiter's default logic. Date: 11-Jul-2007    Time: 11:00:00    Location: IST - Pavilhão de Eng.
Civil, r/c - sala V001 CONTEXT-BASED COMPUTING Instituto Superior Técnico Abstract—In the 1990s, Mark Weiser presented a vision in which computers would be progressively integrated into the objects of our daily lives until they became ubiquitous and transparent. He called it Ubiquitous Computing. The user would come to interact with several mobile devices embedded in the surrounding objects. For the various computing systems that the user carries, or that reside in the environment, to serve the user as well as possible, it is essential that they can act on the basis of context. This is easy for humans, since we use context intensively in order to communicate. From the computer's perspective, context can be obtained directly from sensors (i.e. 'low level') or by applying some kind of transformation or inference to those data (i.e. 'high level'). This document aims to give a broad view of Context-Based Computing, drawing on work carried out in this research and development area. We present classical applications that pioneered the challenges raised by this topic. We then explore aspects such as the definition of generic infrastructures for the collection, interpretation and distribution of context information, and the issue of privacy and trust. We chose to analyse more than one work for each topic, in order to show different approaches to the same problem. Date: 09-Jul-2007    Time: 17:30:00    Location: sala 336 do INESC ID SOFTWARE AS A SERVICE Instituto Superior Técnico Abstract—Software as a Service (SaaS) has the potential to transform the way information technology (IT) departments relate to, and even think about, their role as providers of computing services to the rest of the enterprise. The term started to circulate in 2000/2001, associated with firms such as Citrix Systems.
The emergence of SaaS as an effective software-delivery mechanism creates an opportunity for IT departments to change their focus from deploying and supporting applications to managing the services that those applications provide. A successful service-centric IT, in turn, directly produces more value for the business by providing services that draw from both internal and external sources and align closely with business goals. Date: 09-Jul-2007    Time: 10:00:00    Location: Sala 0.23 - Taguspark ITIL INCIDENT MANAGEMENT PROCESS IN THE PHARMACEUTICAL INDUSTRY Instituto Superior Técnico Abstract—This document gives a general presentation of the Information Technology (IT) incident management process of the ITIL framework and its applicability to the pharmaceutical industry, specifically to pharmacovigilance (drug surveillance). Applied to IT, the mission of this process is to restore service operation as quickly as possible, causing the least possible impact on the organisation. Applied to pharmacovigilance, the mission is to process notifications of adverse drug reactions and to forward the information to teams of specialists at the authorities competent in the area of the reaction, so that they can carry out the most appropriate procedures to resolve the incident (e.g. a medicine recall). Date: 09-Jul-2007    Time: 09:00:00    Location: Sala 0.23 - Taguspark Data Format Description Framework – A Descriptive Approach to Data Standardization Xiaoshu Wang Inesc-ID Abstract—Data standardization is fundamentally prescriptive, because no information system can solve the data integration issue without enforcing certain rules. The question is, therefore, where the rules should be prescribed. Most existing data standards prescribe the rules over the data itself. However, excessive use of such an approach can easily lead to inefficient data representation. An alternative approach enforces the conforming rules over the description of the data.
Under such data standardization, data producers are free to choose the representation of their data, but should describe that representation in a standard manner. By developing software libraries that can understand the data description, this descriptive approach gives maximal flexibility in data representation while still ensuring data interoperability. Date: 05-Jul-2007    Time: 16:00:00    Location: 336 EA, BPM and ORM: Towards Convergence Instituto Superior Técnico Abstract—Most organizations are facing three emerging concerns: Operational Risk Management, Business Process Management and Enterprise Architecture. Although these three disciplines are strongly related to each other, their communities tend to reside in different domains inside an organization, with different vocabularies and without communication. The purpose of this article is therefore to demonstrate a possible approach to the convergence of EA, BPM and ORM within an integrated effort, with real value to the organization, using concepts from different theories, from engineering to the social sciences. Date: 28-Jun-2007    Time: 17:00:00    Location: 9º andar, Edifício INESC, Rua Alves Redol 9, Lisboa Qualitative Simulation of the Carbon Starvation Response in Escherichia coli Delphine Ropers INRIA Rhône-Alpes Abstract—The adaptation of living organisms to their environment is controlled at the molecular level by large and complex networks of genes, mRNAs, proteins, metabolites, and their mutual interactions. In order to understand the overall behavior of an organism, we must complement molecular biology with the dynamic analysis of cellular interaction networks, by constructing mathematical models derived from experimental data and using simulation tools to predict the behavior of the system under a variety of conditions.
Following this methodology, we have started the analysis of the network of global transcription regulators controlling the adaptation of the bacterium Escherichia coli to environmental stress conditions. Even though E. coli is one of the best-studied organisms, little is currently understood about how a stress signal is sensed and propagated throughout the network of global regulators, so as to enable the cell to respond in an adequate way. Using a qualitative method that is able to overcome the current lack of quantitative data on kinetic parameters and molecular concentrations, we have modeled the carbon starvation response network and simulated the response of E. coli cells to carbon deprivation. This has allowed us to identify essential features of the transition between exponential and stationary phase, and to make new predictions on the qualitative system behavior following a carbon upshift. The model predictions have been tested experimentally by means of gene reporter systems. Date: 30-May-2007    Time: 17:30:00    Location: Anfiteatro QA1.1 - Torre de Química/IST GPLAB - A Genetic Programming Toolbox for MATLAB Sara Silva FCT, Universidade de Coimbra Abstract—Genetic Programming (GP) is the automated learning of computer programs. Basically a search process, it is capable of solving complex problems by evolving populations of computer programs, using Darwinian evolution and Mendelian genetics as inspiration. GPLAB is a Genetic Programming toolbox for MATLAB. Besides most of the traditional functionalities used in GP, it also implements two additional features: (1) a method for automatically adapting the genetic operator probabilities at runtime, allowing the toolbox to be used as a test bench for new genetic operators; (2) several of the best state-of-the-art techniques for controlling the well-known bloat problem, including some that automatically resize the population at runtime to save computational resources.
Combining a highly modular and adaptable structure with the concern for automatic setting of most parameters, GPLAB suits all kinds of users, from the layman who wants to use it as a 'black box' to the advanced researcher who intends to build and test new functionalities. The toolbox and its documentation are freely available for download at http://gplab.sourceforge.net. The latest version ensures minimal compatibility with Octave. Date: 24-May-2007    Time: 16:30:00    Location: 336 Affective Embodied Conversational Characters Catherine Pelachaud University of Paris 8 Abstract—Catherine Pelachaud has prominent work on Embodied Conversational Agents (ECA). Her talk will focus on several aspects of the creation of successful ECAs, such as: 1) Nonverbal behavior: facial expression, gesture, gaze; 2) Emotion: model of expressive and emotional behavior; 3) Feedback: model of the feedback behavior of the listener; 4) Audio-visual speech: lip movement, coarticulation model. Date: 24-May-2007    Time: 14:00:00    Location: 2.8 no Taguspark Techniques for Translating Java Bytecode to C Code Instituto Superior Técnico Abstract—FERNANDO MANUEL FERREIRA MIRANDA - Whenever a program written in Java is executed, there is a cost due to the interpretation or Just-in-Time (JIT) compilation of the intermediate code (bytecodes) generated when compiling with the javac tool. The goal of this project is to analyze techniques for translating Java bytecodes into C code. Although C is portable, the objective is not to port the C code, but rather to perform AOT (Ahead-Of-Time) compilation, that is, to compile the program before it is executed, at the site where it will run. A study of the Java code (source and bytecodes) was carried out, along with the identification of the decompilation techniques required to translate the intermediate code into the C language. A study of existing approaches and techniques for translating source code and bytecodes into C was also performed.
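The core of such an AOT translation — eliminating the JVM operand stack by mapping each stack slot to a C local variable — can be sketched as follows. This is an illustrative toy, not the project's actual translator: the bytecode subset and the emitted C are simplified for exposition.

```python
# Toy AOT translator: maps a stack-based bytecode sequence to C statements.
# Each operand-stack slot becomes a C local variable (s0, s1, ...), so the
# emitted code needs no runtime operand stack at all.

def translate(bytecode):
    depth = 0          # current operand-stack depth
    lines = []
    for op, *args in bytecode:
        if op == "iconst":           # push an integer constant
            lines.append(f"int s{depth} = {args[0]};")
            depth += 1
        elif op == "iload":          # push a named local variable
            lines.append(f"int s{depth} = {args[0]};")
            depth += 1
        elif op == "iadd":           # pop two operands, push their sum
            depth -= 1
            lines.append(f"s{depth-1} = s{depth-1} + s{depth};")
        elif op == "ireturn":        # pop and return the top of stack
            depth -= 1
            lines.append(f"return s{depth};")
    return "\n".join(lines)

# Roughly the bytecode javac would emit for `return 2 + 3;`
print(translate([("iconst", 2), ("iconst", 3), ("iadd",), ("ireturn",)]))
```

Running the sketch prints the C body `int s0 = 2; int s1 = 3; s0 = s0 + s1; return s0;`, one statement per line — the stack discipline has been compiled away into straight-line C.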
Date: 10-May-2007    Time: 10:30:00    Location: sala 0.16 no Tagus Park Microbial typing methods: databases and quantitative correspondence between typing methods results João Carriço Inesc-ID Abstract—Typing methods are major tools for the epidemiological characterization of bacterial pathogens, allowing the determination of the clonal relationships between isolates based on their genotypic or phenotypic characteristics. Recent technological advances have resulted in a shift from classical phenotypic typing methods, such as serotyping, biotyping and antibiotic resistance typing, to molecular methods such as restriction fragment length polymorphisms (RFLP), pulsed-field gel electrophoresis (PFGE), and PCR serotyping. With the availability of affordable sequencing methods, another shift occurred towards sequence-based typing methods such as multilocus sequence typing (MLST) and emm sequence typing. Sequence-based methods have a large appeal since they provide unambiguous data and are intrinsically portable, allowing the creation of databases that, if publicly available through the internet, enable the comparison of local data with that of previous studies in different geographical locations. Ideally, an analysis of each typing method, in terms of discriminatory power, reproducibility, typeability, feasibility, and other characteristics, should be performed to better determine which method is appropriate in a given setting. Several molecular epidemiology studies of clinically relevant microorganisms provide a characterization of isolates based on different typing methods. Frequently these studies focus on a comparison between the assigned types of different typing methods, from a qualitative point of view, i.e., indicating correspondences between the types of the different methods.
Although this may be useful for the comparison of the genetic backgrounds of the particular set of isolates under study, it does not allow for a broader view of how the results of the different typing methods are related. In this seminar we present recent work on an online database for a new sequence-based typing method for Staphylococcus aureus and an online tool that implements a framework of measures for the quantitative assessment of the congruence between the results of different typing methods. Date: 12-Apr-2007    Time: 16:30:00    Location: 336 Rationality and Fault-Tolerance Jean-Philippe Martin Microsoft Research Abstract—When there is no central administrator to control the actions of nodes in a distributed system, users may deviate for personal gain. How, then, can we design protocols that give any useful guarantee? In this talk I present research done at the University of Texas at Austin. I show how the BAR model accurately describes these environments and, through an example, show how to apply this model to build protocols that provide guarantees despite rational and Byzantine nodes. Date: 26-Mar-2007    Time: 09:30:00    Location: Room 918 (Auditório Alfa, INESC-ID, Rua Alves Redol) Birrell's Distributed Reference Listing Revisited Richard Elliot Jones University of Kent Abstract—The Java RMI collector is arguably the most widely used distributed garbage collector. Its distributed reference listing algorithm was introduced by Birrell in the context of Network Objects, where the description was informal and heavily biased toward implementation. In this paper, we formalise this algorithm in an implementation-independent manner, which allows us to clarify weaknesses of the initial presentation. In particular, we discover cases critical to the correctness of the algorithm that are not accounted for by Birrell.
We use our formalisation to derive an invariant-based proof of correctness of the algorithm that avoids notoriously difficult temporal reasoning. Furthermore, we offer a novel graphical representation of the state transition diagram, which we use to provide intuitive explanations of the algorithm and to investigate its tolerance to faults in a systematic manner. Finally, we examine how the algorithm may be optimised, either by placing constraints on message channels or by tightening the coupling between application program and distributed garbage collector. References: Birrell's distributed reference listing revisited. Luc Moreau, Peter Dickman, and Richard Jones. ACM Transactions on Programming Languages and Systems (TOPLAS), 27(6):1-52, November 2005. Date: 19-Mar-2007    Time: 14:00:00    Location: Auditório Alfa, Sala 918, INESC-ID Enriching Speech Recognition by Recovering Punctuation and Performing Capitalization Fernando Batista Inesc-ID Abstract—This presentation describes my work on inserting punctuation marks into, and capitalizing, the output of an Automatic Speech Recognition (ASR) system. The output of an ASR system often consists of raw text, usually in lowercase, without any punctuation marks. This work aims to provide transcriptions that are more usable both for humans and for machines. Different experiments were performed: using transducers; the SRILM toolkit; and maximum entropy models. The presentation will describe the advantages and major difficulties in applying each of these methodologies. Results of experiments conducted over both written newspaper corpora and speech output will be presented. As this work is not yet concluded, I will also present future work on this matter.
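As a hint of what the capitalization (truecasing) task involves, the following is a hypothetical unigram baseline — not the transducer, SRILM, or maximum-entropy models discussed in the talk — that simply restores the most frequent casing each word exhibits in cased training text:

```python
from collections import Counter, defaultdict

def train_truecaser(cased_corpus):
    """Count the surface casings each word appears with in cased training text."""
    counts = defaultdict(Counter)
    for token in cased_corpus.split():
        counts[token.lower()][token] += 1
    # Keep only the most frequent surface form per lowercased word.
    return {word: forms.most_common(1)[0][0] for word, forms in counts.items()}

def truecase(model, lower_text):
    """Restore casing of lowercased ASR output; unknown words pass through."""
    return " ".join(model.get(tok, tok) for tok in lower_text.split())

# Tiny illustrative corpus (hypothetical training data).
model = train_truecaser("Lisbon is the capital of Portugal . Lisbon hosts INESC-ID .")
print(truecase(model, "the capital is lisbon"))  # → the capital is Lisbon
```

Real systems additionally condition on context (sentence position, neighbouring words), which is where sequence models such as the maximum-entropy approach mentioned above come in.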
Date: 09-Mar-2007    Time: 14:30:00    Location: 336 BiGGEsTS - Biclustering Gene Expression Time-Series Joana Gonçalves Universidade da Beira Interior Abstract—The BiGGEsTS (Biclustering Gene Expression Time-Series) tool aims to integrate biclustering algorithms for the analysis of gene expression time series. These algorithms address the biclustering problem in gene expression time-series data directly, that is, they identify biclusters formed by a set of genes with coherent expression over a contiguous subset of the time points under analysis. The identified biclusters can then be visualized and studied within the application, along several analysis dimensions, in order to single out those that are relevant from a biological point of view and may later help in the identification of regulatory modules. Although tools that integrate biclustering algorithms for gene expression data in general already exist, the development of a tool for the specific case of time series is novel, given the particular nature of the integrated biclustering algorithms and of the results they produce. This seminar will present the current version of the tool and discuss directions for future work. Date: 01-Mar-2007    Time: 16:00:00    Location: 336 Variation-Aware Timing Analysis Luis Guerra e Silva Inesc-ID Abstract—With IC technology steadily progressing into nanometer dimensions, precise control over all aspects of the fabrication process becomes an area of increasing concern. The impact of process parameter variations on circuit performance is becoming quite significant, particularly with respect to timing. In this new context, traditional timing verification methodologies are starting to fail.
Improving this situation requires tools that are better suited to handle realistic process variations and the complex inter-relations that exist between those variations. In this talk we propose a variation-aware methodology for timing analysis of ICs. We present new delay modeling techniques, where cell and interconnect delays are modeled by affine functions of the process parameters, rather than fixed numeric values. Additionally, we present a variation-aware timing analysis methodology that, using parametric delay models, extends traditional corner-based signoff techniques. Both contributions can be easily integrated into existing timing engines, producing insightful information for effectively guiding manual or automated circuit optimization in a variation-aware fashion. Date: 01-Mar-2007    Time: 14:00:00    Location: 336 A multi-microphone approach to speech processing in a smart-room environment Alberto Abad Gareta Inesc-ID Abstract—Recent advances in computer technology, speech and language processing, and image processing have made new forms of person-machine communication and computer assistance to human activities appear feasible. Concretely, interest in the development of new challenging applications for indoor environments equipped with multiple sensors, also known as smart-rooms, has grown considerably in recent years. In recent years UPC has been participating in the EU-funded CHIL project - Computers in the Human Interaction Loop. The project was mainly aimed at developing intelligent services capable of assisting and complementing human activities, while requiring the minimum possible awareness from the users. Consequently, there was a need for perceptual user interfaces that were multimodal, robust, and based on unobtrusive sensors. My most recent work is precisely related to the acoustic research activities carried out at UPC in the context of the CHIL project.
Particularly, I have been investigating the use of multi-microphone approaches to speech processing as a possible solution to the problems that appear in the deployment of hands-free speech applications in real room environments. First, I will describe some of the work carried out on ASR with microphone arrays. Then, I will also briefly comment on my work related to speaker tracking and head orientation estimation. Date: 23-Feb-2007    Time: 14:30:00    Location: 336 A conceptual model for continuous organizational auditing with real-time analysis Carlos Alberto Lourenço dos Santos Instituto Superior Técnico Abstract—The growing importance of organizational information systems in the steering of organizations, in particular in their capacity for continuous adaptation to new challenges with real-time response, demands that particular attention be paid to their evaluation and validation, on the assumption that an evaluated and validated information system gives stakeholders the assurance that the organization deserves their trust. Evaluating and validating is an auditing process which, to be conducted according to good practice, must verify and assess whether the specific control objectives associated with the various control points are achieved with reasonable certainty.
Thus, this thesis proposes a conceptual model for continuous organizational auditing with real-time analysis, resting on five pillars: the use of organizational engineering theory for business modeling, employing the CEO framework and the UML modeling language; the use of internal control theory in accordance with the “Enterprise Risk Management – Integrated Framework” published by COSO, the Committee of Sponsoring Organizations of the Treadway Commission; micro-level analysis of business processes, at the level of organizational transactions, identifying and assessing the risk associated with them; the cross-checking of control mechanisms against the business processes; and the use of the SPIN model checker for the formal validation of business process models with embedded control, as a guarantee of their consistency. The main scientific contributions of this thesis are: the consistent and coherent design of an internal control system; the formal verification of business processes with embedded control against the specific control objectives; and a conceptual model to support continuous organizational auditing with real-time analysis. Keywords: organizational information system; organizational engineering; organizational transaction; internal control system; formal verification; organizational auditing. Date: 22-Feb-2007    Time: 10:00:00    Location: Anfiteatro do Complexo do IST Ab Initio Protein Structure Prediction using Conformational Search and Information from Known Protein Structures Miguel Bugalho Inesc-ID Abstract—Most protein folding methods use information from known proteins to predict protein structure. For homology and fold recognition methods this information is used directly, and good results can be obtained if a sufficiently similar protein with known structure is found.
However, if no such protein is available, or for large unmatched regions, ab initio methods can be of great help (especially for small proteins). Our method uses a fragment library and a search technique to create possible structures, from which a high-scoring set can then be analysed. The search alternates between testing for possible fragments and stochastically choosing one of the fragments using a score based on current and previous search information. Backtracking is performed if no fragments are available. When a structure is completed, a score is calculated using frequencies of contacts and buried states derived from known proteins. The score information is saved for use in subsequent structure searches, and a new point in the search tree is stochastically chosen for constructing a new structure. The algorithm chooses points in previously constructed structures that had lower scores, trying to improve those structures. Date: 15-Feb-2007    Time: 16:00:00    Location: 425 RF QUADRATURE OSCILLATORS Luís Oliveira Inesc-ID Abstract—Nowadays, the demand for mobile and portable equipment has led to a large increase in wireless communication applications. Modern transceiver architectures, to achieve full integration and low cost, require quadrature oscillators with a very accurate quadrature relationship, since quadrature errors strongly affect the overall performance of the RF (Radio Frequency) front-end. Relaxation oscillators are known for their poor phase-noise performance. Here we show that, by cross-coupling two oscillators, the phase-noise performance is improved. Moreover, strong coupling reduces the effect of mismatches and other disturbances: these are attenuated by the feedback, becoming second-order effects, and this guarantees a very accurate quadrature. LC oscillators are known for their good phase-noise performance when compared with relaxation oscillators.
In a cross-coupled LC oscillator, coupling is necessary for accurate quadrature in the presence of mismatches; however, this degrades the oscillator phase-noise, due to the degradation of the quality factor. Date: 15-Feb-2007    Time: 14:00:00    Location: 425, 4th floor at INESC-ID CARMA - A Reconfigurable MPSoC Architecture for Cryptographic Applications Daniel Mesquita Inesc-ID Abstract—The CARMA architecture was conceived with the goal of providing high-performance cryptographic resources that are secure against side-channel attacks. Cryptographic circuits leak information such as power consumption, computation time and electromagnetic emissions, among others. This side-channel information can be used to attack cryptographic systems (side-channel attacks). The CARMA architecture combines dynamic reconfiguration techniques with an arithmetic based on the RNS representation to prevent power-analysis attacks. Prototyping results show the increase in security brought by this technique when used together with CARMA. The architecture can be included in the context of the ARTEMIS platform, since it is generic enough to run other kinds of applications, such as data compression, error detection/correction (fault tolerance) and image processing. Date: 14-Feb-2007    Time: 14:30:00    Location: IST, TagusPark, sala 0.26 Mining Protein Structure Data José Carlos Almeida Santos Imperial College London Abstract—This presentation will show the application of data mining techniques, in particular machine learning, for the discovery of knowledge in a protein database. The main problem we address is determining whether an amino acid is exposed or buried in a protein, for five exposure levels: 2%, 10%, 20%, 25% and 30%. First we introduce the baseline classifier for this problem which, although very simple (it only takes into account the amino acid type), already achieves good prediction results.
Then we explain how, by building a local PDB database and retrieving DSSP and SCOP data, we construct our classifier to improve on the baseline prediction. Finally we test and compare several classifiers (Neural Networks, C5.0, CART and CHAID), and the parameters that might influence the prediction accuracy, namely the level of information per amino acid, the SCOP class of the protein, and the neighbourhood of the current amino acid (i.e., the sliding window size). Keywords: Amino acid Relative Solvent Accessibility, Protein Structure Prediction, Data Mining, BioInformatics, Artificial Intelligence Date: 01-Feb-2007    Time: 16:00:00    Location: 336 Statistics Meets Molecular Biology Lisete Sousa Dept Estatística e Investigação Operacional - Fac Ciências/Univ Lisboa Abstract—The combination of the explosion of molecular genetics data with the technological advances in computing is, at present, a challenge for the statistician, as it requires improving existing methods and developing more efficient inference methods to deal with data of such a complex nature. The aim is to show how statistics is fundamental in the study of systems as diverse as protein structure and microarrays. Concrete studies in molecular biology, with very specific goals, serve as the basis for presenting the statistical methodologies most applied in this area. The importance of the available software is also highlighted, namely some R packages and programs with web interfaces. Date: 25-Jan-2007    Time: 14:00:00    Location: 336 Introduction to the ISO TC/211 and the Open Geospatial Consortium Miguel A. Bernabé Instituto Superior Técnico Abstract—The ISO/TC 211 (Geographic Information / Geomatics) is responsible for the ISO geographic information series of standards.
This work aims to establish a structured set of standards for information concerning objects or phenomena that are directly or indirectly associated with a location relative to the Earth. These standards may specify methods, tools and services for data management, acquisition, processing, analysis, access, presentation and transfer. The Open Geospatial Consortium is an international industry consortium of more than 335 entities that develops publicly available interface specifications. OpenGIS Specifications support interoperable solutions that geo-enable the Web, wireless and location-based services, and mainstream IT. Date: 25-Jan-2007    Time: 14:00:00    Location: Room FA3 (IST - Alameda) Networks-on-Chip: a New Intra-Chip Communication Paradigm Mário Pereira Véstias Instituto Superior de Engenharia de Lisboa (ISEL) Abstract—The network-on-chip (NoC) is a new approach to the design of single-chip systems in cases where communication is the major design challenge. NoCs are organized as packet-switched networks. This approach borrows many concepts from computer networks and parallel computing, but takes different design constraints and trade-offs into account, hence its specificity. The seminar introduces the concepts associated with networks-on-chip and the motivations behind their use. Some approaches to the design of this kind of system will also be presented, in particular in the reconfigurable domain. Date: 22-Jan-2007    Time: 14:30:00    Location: sala 0.16, IST, TagusPark Next Generation of MicroBioelectronic Systems Moises Simões Piedade Inesc-ID Abstract—This presentation will review recent research projects aimed at the development of advanced micro-bioelectronic systems. Interdisciplinary research projects with objectives oriented to the realization of Lab-on-Pocket, Lab-on-Chip and Lab-on-Cell systems will be especially considered.
The state of research and the technological achievements (and difficulties) of the ongoing INESC Biochip Project will be discussed. Finally, the new INESC research project BIOMAGCMOS will be presented and explained. Date: 18-Jan-2007    Time: 14:00:00    Location: 336 Applications of Reconfigurable Computing in the Design of Mobile Robots Eduardo Marques Universidade de São Paulo Abstract—This seminar will present applications of reconfigurable computing in mobile robotics. Architectures based on the SoC (System-on-a-Chip) concept are used to accelerate the application. This SoC is implemented on latest-generation reprogrammable FPGA (Field Programmable Gate Array) circuits from Altera. The target architecture consists of an Altera NIOS II softcore processor, coupled with several reconfigurable processing units (RPUs) developed especially for the mobile robotics domain. This methodology allows researchers and designers in mobile robotics to test their algorithms on high-performance systems and thereby explore new solutions for real-time use, a requirement increasingly present in embedded mobile robotics. Validation tests of the generated system are carried out with a Pioneer 3DX robot. Current work focuses on a hardware/software co-design environment to ease the development of mobile robotics applications on FPGAs. The project began in April 2005 under the CNPq/Grices agreement, involving the University of São Paulo and the University of the Algarve (with INESC-ID/IST currently as the Portuguese partner).
Date: 14-Dec-2006    Time: 14:30:00    Location: IST, TagusPark, sala 0.26 Architecture and Performance of Dynamic Offloader for Cluster Network Keiichi Aoki University of Tsukuba Abstract—With the improvement of network hardware technology, researchers in the high-performance network architecture field have focused on techniques for migrating part of the calculations performed by the host CPU into the network hardware, since recent network hardware includes high-performance processors for controlling network protocols. Several research efforts and products for offloading communication protocols to network devices have succeeded in increasing communication performance. However, in these techniques, the host processor still needs to access the network hardware frequently, since the unit of offloading is small. This presentation will show the design of a software environment to offload user-defined software modules to the Maestro2 cluster network. This mechanism is called the Maestro Dynamic Offloading mechanism (MDO). To address this problem, MDO allows an application program to offload the major part of its communication procedure. Experimental results of the performance evaluation of MDO will be shown by offloading collective communication patterns. Date: 07-Dec-2006    Time: 16:00:00    Location: 336 Extracting MUCs from Constraint Networks Lakhdar Saïs Université d'Artois Abstract—We address the problem of extracting Minimal Unsatisfiable Cores (MUCs) from constraint networks. This computationally hard problem has a practical interest in many application domains such as configuration, planning, diagnosis, etc. Indeed, identifying one or several disjoint MUCs can help circumscribe different sources of inconsistency in order to repair a system.
In this paper, we propose an original approach that involves performing successive runs of a complete backtracking search, using constraint weighting, in order to surround an inconsistent part of a network, before identifying all transition constraints belonging to a MUC using a dichotomic process. We show the effectiveness of this approach, both theoretically and experimentally. Date: 28-Nov-2006    Time: 16:00:00    Location: 336 Modelling Services in Information Systems Architectures Anacleto Correia Instituto Superior Técnico Abstract—Twenty years ago, Zachman proposed a framework - the Information Systems Architecture - that was certainly one of the main contributions to the Enterprise Architecture research area. More recently, the concept of service was proposed and largely adopted, thus introducing another, but fundamental, perspective on how organizations not only operate internally but also relate with stakeholders. In this paper we propose an extension to the Zachman framework that incorporates the concept of service. Date: 28-Nov-2006    Time: 12:00:00    Location: Sala 0.9 Taguspark Functional organization of chromosomes in the mammalian cell nucleus Ana Pombo Imperial College London Abstract—Chromosomes are not randomly folded in a spaghetti-like state in the mammalian cell nucleus, as initially thought, but occupy distinct territories. Recent studies show that these chromosome territories have preferential arrangements in different cell types, which correlate with the kinds of chromosome rearrangements that occur preferentially in each cell type. Evidence for a growing number of long-range interactions between DNA segments in the same or different chromosomes has raised the possibility of a three-dimensional network of genome interactions. As the long-range interactions described so far correlate with gene activity states, they are likely to influence and be influenced by the transcriptome of each cell type.
We propose that this interchromosomal network of interactions contains epigenetic information and determines cell-type-specific chromosome conformations and rearrangements. Date: 27-Nov-2006    Time: 14:00:00    Location: Anfiteatro FA1 - IST Computing Maximal Satisfiable (MSS) and Minimal Unsatisfiable (MUS) Subsets of CNF Formulas Cédric Piette Université d'Artois Abstract—In this presentation, a new complete technique to compute Maximal Satisfiable Subsets (MSS) and Minimally Unsatisfiable Subformulas (MUS) of sets of Boolean clauses is introduced. The approach improves on the currently most efficient complete technique in several ways. It makes use of the powerful concept of critical clause and of a computationally inexpensive local search oracle to boost an exhaustive algorithm proposed by Liffiton and Sakallah. These features can allow exponential efficiency gains to be obtained. Accordingly, experimental studies show that this new approach outperforms the best existing exhaustive ones. Date: 27-Nov-2006    Time: 14:00:00    Location: 336 Regular Expression Matching for Reconfigurable Packet Inspection João Bispo Inesc-ID Abstract—Recent intrusion detection systems (IDS) use regular expressions instead of static patterns as a more efficient way to represent hazardous packet payload contents. This presentation focuses on regular expression pattern matching engines implemented in reconfigurable hardware. We present a Nondeterministic Finite Automata (NFA) based implementation, which takes advantage of new basic building blocks to support more complex regular expressions than previous approaches. Our methodology is supported by a tool that automatically generates the circuitry for the given regular expressions, outputting VHDL representations ready for logic synthesis. Furthermore, we include techniques to reduce the area cost of our designs and maximize performance when targeting FPGAs.
Experimental results show that our tool is able to generate a regular expression engine that matches more than 500 IDS regular expressions (from the Snort ruleset) using only 25K logic cells, achieving 2 Gbps throughput on a Virtex2 device and 2.9 Gbps on a Virtex4 device. Concerning the throughput per area required per matched non-meta character, our design is 3.4× and 10× more efficient than previous ASIC and FPGA approaches, respectively. Date: 25-Nov-2006    Time: 14:30:00    Location: sala 1.65, IST, TagusPark. Who spoke when Janez Žibert University of Ljubljana Abstract—The thesis addresses the problem of structuring audio data in terms of speakers, i.e., finding the regions in the audio streams that belong to one speaker and joining the regions of the same speaker together. The task of organizing audio data in this way is known as speaker diarization and was first introduced in the NIST Rich Transcription project in the "Who spoke when" evaluations. The speaker-diarization problem is composed of several tasks. This thesis addresses three of them: speech/non-speech segmentation, speaker- and background-change detection, and speaker clustering. The main objectives of our research were to develop new representations of audio data that are more suitable for each task, and to improve the accuracy and increase the robustness of standard approaches under various acoustic and environmental conditions. The motivation for improving the existing methods and developing new procedures for speaker-diarization tasks is the design of a system for the speaker-based audio indexing of broadcast news shows. Date: 23-Nov-2006    Time: 15:00:00    Location: INESC ID, 4th floor meeting room Linearity improvement techniques for high-speed ADCs Pedro Figueiredo ChipIdea Microelectrónica Abstract—Analog-to-digital converters (ADCs) are the link between the "real" analog world and the tremendous processing and memorization possibilities of digital circuits.
There are several architectures, each suited to a certain sampling frequency / resolution range. This talk will focus on high-speed, medium-resolution ADCs implemented in CMOS technologies, which are a fundamental building block of many video and communication systems, as well as of optical and magnetic data storage frontends. The linearity that can be achieved by these converters is mainly limited by the offset voltages existing in their constituent blocks. This talk addresses state-of-the-art offset reduction techniques, and presents results from integrated ADC prototypes. Date: 23-Nov-2006    Time: 14:00:00    Location: 336 Text-Independent Cross-Language Voice Conversion for Speech-to-Speech Translation David Sündermann Universitat Politècnica de Catalunya Abstract—For applications like multi-user speech-to-speech translation, it is helpful to individualize the output voice to make voices distinguishable. Ideally, this should be done by applying the input speaker's voice characteristics to the output speech. In general, a speech-to-speech translation system consists of three main modules: speech recognition, text translation, and speech synthesis. Since the latter, the speech synthesis module, is normally based on a large speech corpus of a professional speaker, manually corrected and carefully tuned, the output voice characteristics are static. This is overcome by a fourth module, the voice conversion unit, which processes the synthesizer's speech according to the input voice characteristics.
Due to the nature of speech-to-speech translation, input and output voices use different languages, leading to two challenges: (i) as opposed to state-of-the-art voice conversion, whose statistical parameter training is based on parallel utterances of both involved speakers (the text-dependent approach), here we have to rely on text-independent parameter training, since there is no way to produce parallel utterances in different languages; (ii) most voice conversion techniques estimate conversion functions that depend on the phonetic class, either explicitly (e.g. using CART) or implicitly (e.g. using GMM); however, with different languages we face different phoneme sets, which makes it hard to find conversion functions for phonetic units not covered by the other phoneme set. In this talk, I present text-independent voice conversion techniques that are cross-language portable and aim to solve these challenges. In this context, I will (i) introduce a speech alignment technique based on unit selection that deals with non-parallel speech and (ii) show that vocal tract length normalization, which is applied to convert the source voice towards the target, can be applied directly to the time frames, without a detour through the frequency domain. The techniques' performance is assessed on several multilingual corpora in the framework of subjective evaluations. In addition to the evaluation results, speech samples will be used to illustrate the effectiveness of the discussed techniques.
Date: 17-Nov-2006    Time: 15:30:00    Location: 336

Motion Tracking on Manifolds
Jorge Silva, Instituto Superior de Engenharia de Lisboa (ISEL)
Abstract—There has been growing interest in algorithms capable of learning models from large volumes of multidimensional data, using statistical, geometrical and dynamical information. There are many domains of application for such algorithms, e.g.
in exploratory data analysis, computer vision, system identification, control, computer graphics and multimedia databases. While the linear case can be solved by the well-known Principal Component Analysis technique, the non-linear case is more complex. Recently, there have been advances in algorithms that approximate the data through manifold learning. The present work fits this framework, with emphasis on the problem of motion tracking, particularly in video sequences, assuming that the data do not occupy the whole observation space but rather a manifold embedded in it. This thesis proposes a manifold learning algorithm, named Gaussian Process Tangent Bundle Approximation (GP-TBA). This algorithm can deal with arbitrary manifold topology by decomposing the manifold into multiple local models, while also providing a probabilistic description of the data based on Gaussian process regression. The model provided by GP-TBA is also used to simplify the motion tracking problem, for which a multiple-filter architecture, using, e.g., Kalman or particle filtering, is described. The GP-TBA algorithm and the filter bank framework are illustrated with experimental results on real video sequences.
Date: 16-Nov-2006    Time: 16:00:00    Location: 336

CMOS Technology Sub-1 V Supply Voltage References Based on Asymmetric Gain Stage Architecture
Igor Filanovsky, Universidade Aberta
Abstract—Voltage references are used in various fields of application, such as digital-to-analog converters, the automotive industry, battery-operated DRAMs and others. The widely known bandgap references (BGR) are not able to operate when the supply voltage drops below 0.9 V. There is a need (at least in the future) for voltage references operating from a low supply voltage (say, 0.6 V). Non-bandgap references (NBGR) are promising circuits for low-voltage supplies, yet they have not been sufficiently investigated.
Date: 16-Nov-2006    Time: 15:30:00    Location: IST (Taguspark), Anfiteatro 3

Network Inference From Co-occurrences
Mário A. T. Figueiredo, Instituto de Telecomunicações (IT)
Abstract—We consider the problem of inferring the structure of a network from co-occurrence data: observations that indicate which nodes occur in a signaling pathway but do not directly reveal node order within the pathway. This problem is motivated by network inference problems arising in computational biology and communication systems, in which it is difficult or impossible to obtain precise time-ordering information. Without order information, every permutation of the activated nodes leads to a different feasible solution, resulting in a combinatorial explosion of the feasible set. However, the physical principles underlying most networked systems suggest that not all feasible solutions are equally likely. Intuitively, nodes which co-occur more frequently are probably more closely connected. Building on this intuition, we model path co-activations as randomly shuffled samples of a random walk on the network. We derive a computationally efficient network inference algorithm and, via novel concentration inequalities for importance sampling estimators, prove that a polynomial-complexity Monte Carlo version of the algorithm converges with high probability.
Date: 16-Nov-2006    Time: 14:00:00    Location: 336

CONTROL OF ANESTHESIA OF PATIENTS UNDERGOING SURGERY
J. M. Lemos, INESC-ID
Abstract—The use of control techniques to drive biomedical systems is a subject that receives increasing attention. Suitable sensors and actuators, on the one hand, and progress in control theory, from which suitable algorithms are derived, on the other, have made feedback control possible in several situations. The seminar will address the problem of controlling neuromuscular blockade and the level of consciousness in patients subject to general anesthesia.
Adaptive control algorithms, including switched multiple-model control and multiple-model predictive adaptive control, together with embedded sensor fault tolerance, will be described as a means to tackle the uncertainty in the systems under control. Clinical cases obtained at Hospital Geral de Santo António (Porto) by the Departamento de Matemática Aplicada (FCUP) that illustrate these algorithms will be presented.
Date: 09-Nov-2006    Time: 14:00:00    Location: 336

RISK ANALYSIS
Carlos Ferreira, Instituto Superior Técnico
Abstract—Problems happen every day in every area, from the car tyre that goes flat on the way to a meeting with the boss, to the widespread problem of representing the year in software with only two digits, which gave rise to the thousands of euros spent on the transition to the year 2000. All these problems end up resulting, in one way or another, in worry and in losses of time and money! We should therefore try to prevent problems, that is, to avoid or control risks. Nowadays most organizations have understood the importance of controlling the risks in their projects, but there is little knowledge of how this should be done. This lack of knowledge can lead to the choice of inadequate risk management techniques. In this work, although not all existing methodologies and techniques are covered, we aim to shed light on current risk management processes, highlighting, through a comparative study, some of the strengths and weaknesses of each.
Date: 08-Nov-2006    Time: 18:30:00    Location: 336

Extractive summarization of broadcast news
Ricardo Ribeiro, INESC-ID
Abstract—We present early results from our work on extractive summarization of broadcast news. The feature-based summarizer receives as input the automatic transcription of the news, already divided into stories, and produces as output a summary for each story. The main problems dealt with were sentence segmentation and scoring.
Since summary evaluation requires hand-made summaries and/or human grading of the produced summaries, it is left as future work.
Date: 03-Nov-2006    Time: 15:30:00    Location: 336

Knowledge Discovery in Genomics and BioIntelligence Research
A. Fazel Famili, Institute for Information Technology (IIT) - National Research Council of Canada
Abstract—Knowledge discovery is the process of developing strategies to discover useful and, ideally, all previously unknown knowledge from historical or real-time data. Applied to high-throughput genomics, knowledge discovery processes help in various research and development activities, such as (i) studying data quality for possible anomalous or questionable expressions of certain genes or experiments, (ii) identifying relationships between genes and their functions based on time-series or other high-throughput genomics profiles, (iii) investigating gene responses to treatments under various conditions, such as in-vitro or in-vivo studies, and (iv) discovering models for clinical diagnosis/classification based on expression profiles among two or more classes. This presentation consists of three parts. In part one, we provide an overview of knowledge discovery in genomics and of the BioMine project. In part two, we describe some of our case studies using the BioMiner data mining software that we have built in this project. These are all cases in which real genomics data sets (obtained from public or private sources) have been used for tasks such as gene function identification and gene response analysis. We will describe a few examples explaining the complexities and challenges of dealing with real data. In the last part of this talk, we share the experience gained over the last 6 years and describe our current activities and future plans in the BioIntelligence research direction.
Date: 30-Oct-2006    Time: 10:00:00    Location: 336

A Domain Knowledge Advisor for Dialogue Systems
Porfírio Pena Filipe, INESC-ID
Abstract—This paper describes ongoing research aimed at enhancing our Domain Knowledge Manager (DKM), a module of a multi-purpose Spoken Dialogue System (SDS) architecture. The application domain is materialized as an arbitrary set of devices, such as household appliances, providing useful tasks to the SDS users. Our main contribution is a DKM advisor service, which suggests the best task-device pairs to satisfy a request. Additionally, we also propose a DKM recognizer service to identify the domain's concepts in a natural language request. These services use a domain model as their knowledge source, to obtain knowledge about devices and the tasks they provide. The implementation of these services allows the DKM to offer a high-level, easy-to-use small interface, instead of a conventional service interface with several remote procedures/methods. These services have been tested in a domain simulator. Our contributions aim to address SDS domain portability issues.
Date: 20-Oct-2006    Time: 15:30:00    Location: 336

PET Positron Emission Tomography - Front-End Electronics
Edgar Francisco Monteiro Albuquerque, INESC-ID
Abstract—Medical instrumentation is a relatively new and very fast-growing field of engineering, driven by the advances of the last decade in microelectronics and, particularly, in solid-state sensors. In order to take advantage of these new developments, as well as of the competences, knowledge and other synergies existing in national institutions, the PET – Mammography Consortium was created in December 2002, led by LIP, Laboratório de Física de Partículas, and composed of seven national institutions specialized in nuclear medicine, radiation detector physics, biophysics, medical engineering, electronics, computing and mechanical engineering, among them INESC-ID/INOV. INESC-ID/INOV was responsible for developing the required electronic systems; in particular, the Analog and Mixed-Signal Circuits Group was charged with developing the integrated circuit for processing the signals coming from the sensors, known as the Front-End ASIC. This seminar presents the requirements for the Front-End ASIC and the proposed architecture. It also presents the blocks that make up the system, namely: amplifier, high-precision comparator, analog memories, analog multiplexers and system controller. The fabricated prototypes are presented, together with the corresponding laboratory test results. The difficulties encountered in integrating a high-performance mixed analog/digital system of this kind, containing high-precision analog blocks and digital blocks operating at high frequencies, are also discussed. Finally, the current state of the project is analyzed, the remaining tasks are enumerated, and the work to be carried out in the near future is outlined, together with the lessons learned in the course of this project.
Date: 12-Oct-2006    Time: 14:00:00    Location: 336

Dynamic Entropy-Compressed Sequences and Applications
Gonzalo Navarro, Departamento de Ciencias de Computación (DCC), Universidad de Chile
Abstract—Data structures are called succinct when they take little space (usually of lower order) compared to the data they give access to.
A more ambitious challenge is that of compressed data structures, which aim to operate within space proportional to that of the compressed data they give access to. Designing compressed data structures goes beyond compression, in the sense that the data must be manageable in compressed form without first decompressing it. This is a trend that has gained much attention in recent years. In this talk we will introduce a simple data structure for managing bit sequences, so that the space required is essentially that of the zero-order entropy of the sequence, and the operations of inserting/deleting bits, accessing a bit position, and computing rank/select over the sequence can all be done in logarithmic time. The rank operation gives the number of 1 (or 0) bits up to a given position, whereas select gives the position of the j-th 1 (or 0) bit in the sequence. This basic result has a surprising number of consequences. We show how it permits obtaining novel solutions to the dynamic partial sums with indels problem, dynamic wavelet trees, and dynamic compressed full-text indexes.
Date: 09-Oct-2006    Time: 16:00:00    Location: 336

Discriminative Modeling in NLP/SLP
Christian Weiss, INESC-ID
Abstract—For a long time, generative models such as HMMs have been the state of the art in NLP/SLP. HMMs are successful in various domains: speech recognition, PoS tagging, G2P, TTS, etc. However, HMMs have some limitations that recent statistical modeling approaches overcome. These statistical learning algorithms can be grouped under discriminative modeling or, as in speech recognition, under discriminative training. One of these algorithms is the Conditional Random Field approach. The talk gives an overview of what a Conditional Random Field is and of the difference between generative and discriminative models.
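The rank and select operations defined in the Navarro abstract above are easy to state in code. The following is a naive O(n) Python sketch for illustration only; the talk's compressed structure supports the same operations, plus bit insertion and deletion, in logarithmic time within entropy-bounded space:

```python
def rank(bits, b, i):
    """Number of occurrences of bit b in bits[0..i], inclusive."""
    return bits[:i + 1].count(b)

def select(bits, b, j):
    """0-based position of the j-th occurrence (1-based) of bit b, or -1 if none."""
    seen = 0
    for pos, bit in enumerate(bits):
        if bit == b:
            seen += 1
            if seen == j:
                return pos
    return -1

seq = [1, 0, 1, 1, 0, 1]
# rank(seq, 1, 3) == 3: three 1-bits among positions 0..3
# select(seq, 0, 2) == 4: the second 0-bit sits at position 4
```

Note the duality between the two operations: when bits[i] == b, select(bits, b, rank(bits, b, i)) returns i.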
Date: 06-Oct-2006    Time: 15:30:00    Location: 4th floor meeting room

Spatial issues in the design of reserves for biodiversity protection
Jorge Orestes, ISA - Instituto Superior de Agronomia
Abstract—When designing networks of protected areas, besides guaranteeing the representation of biodiversity, one must take into account requirements concerning the spatial configuration of the reserves. One of these requirements is to ensure a certain degree of connectivity or contiguity among the selected parcels. I will present and discuss three problems related to connectivity in the design of networks for biodiversity protection.
Date: 03-Oct-2006    Time: 16:00:00    Location: 336

Punctuation and capitalization in speech transcriptions
Fernando Batista, INESC-ID
Abstract—I will present some experiments carried out over the last two months on inserting punctuation and restoring correct capitalization in texts produced by a speech recognizer. The goal of this work is to evaluate the performance of automatic methods on these tasks and to understand how they can be optimized. So far, experiments have been carried out using the SRILM toolkit and transducers. The work is not yet finished, so the presentation will focus on describing the methodology being employed, the conditions under which it is applied, and the various obstacles that have arisen.
Date: 29-Sep-2006    Time: 15:30:00    Location: 336

Applications of Rewriting-Logic in Reconfigurable Hardware Design Space Exploration
Carlos Morra, University of Karlsruhe
Abstract—Reconfigurable architectures are increasingly being used for digital signal processing applications. The typical development process for DSP applications starts with a set of mathematical equations which are manipulated and interpreted by the developer, and then manually translated into a lower abstraction level.
The developer must consider many different implementation approaches and parameters in order to obtain the best trade-offs for the given application on the target architecture. The exploration of different approaches and implementation alternatives is a very complex, time-consuming and error-prone process which requires a lot of expertise from the developer. To address this problem, a novel tool flow based on rewriting logic is being developed. The talk presents the tool flow and some application examples.
Date: 27-Sep-2006    Time: 16:00:00    Location: 336

Formats and services for data and algorithm interoperation in Bioinformatics
Jonas S Almeida, University of Texas M.D. Anderson Cancer Center
Abstract—Data integration in the life sciences presently faces a conundrum. On the one hand, the diversity of data is increasing as explosively as its volume; on the other hand, the value of individual data sets can only be appreciated when enough of those distinct pieces of the systemic puzzle are put together. Consequently, it is just as imperative to have agreed-upon standard formats as it is that they not be enforced so strictly as to become an obstacle to reporting the very novel data that brings value to systemic integration. In this presentation, the emerging use of semantic web technologies is highlighted with regard to its practical implications for experimental biology and translational biomedical research. The new integrative technologies create tremendous opportunities for wider participation by both individual and national initiatives in large-scale international research efforts. They also create the challenge of locally developing fluid multidisciplinary capabilities, which are still not the norm in the life sciences. A prototypic integrative infrastructure, which can be freely downloaded as open source from www.s3db.org, will be demonstrated to illustrate the obstacles and the potential of ontology-driven data processing.
References: Wang X, Gorlitsky R, Almeida JS (2005). From XML to RDF: How Semantic Web Technologies Will Change the Design of Omic Standards. Nature Biotechnology 23(9):1099-1103. Almeida JS et al. (2006). Data Integration Gets 'Sloppy'. Nature Biotechnology 24(9):6-7.
Date: 25-Sep-2006    Time: 14:00:00    Location: 336

Generation of ranking functions using Genetic Programming
Marcos André Gonçalves, Universidade Federal de Minas Gerais
Abstract—The effectiveness of an information retrieval system depends fundamentally on the quality of its document ranking function. To date, literally thousands of alternative ranking functions have been studied empirically. It is also known that the behavior of functions considered standard, such as TF-IDF and BM25, can vary according to the context (collection and queries) to which they are applied. For this reason, approaches that learn specific characteristics of this context in order to generate a more specialized ranking function have achieved better results than the standard functions. One such approach is Genetic Programming (GP). Several works use statistical evidence from the collection, the documents and the queries as features of the individuals. Unlike those, this work uses more meaningful evidence in place of statistical information. This evidence was extracted from well-known ranking functions (CCA) and from probabilities (PROB) of occurrence of terms and documents in a collection. The best results obtained with this evidence on the TREC-8 collection showed gains of about 41% in mean average precision (MAP) over BM25 and of almost 18% over an approach that uses GP with statistical evidence.
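For reference, the BM25 function mentioned above scores a document against a query as a sum of IDF-weighted, length-normalized term frequencies. A minimal Python sketch with the customary default parameters (k1 = 1.2, b = 0.75); the toy corpus is invented for illustration and is not from the talk:

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one document (a list of terms) against a query with classic BM25."""
    n_docs = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n_docs   # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)   # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)                        # term frequency in doc
        denom = tf + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf * (k1 + 1) / denom
    return score

corpus = [["gene", "expression", "data"],
          ["ranking", "function", "learning"],
          ["genetic", "programming", "ranking"]]
```

GP-based approaches like the one described search over combinations of exactly these kinds of statistics (term frequency, document frequency, document length) to evolve a collection-specific ranking function.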
Date: 13-Sep-2006    Time: 15:00:00    Location: Taguspark, anfiteatro A5

“FRAUD IN TELECOMMUNICATIONS AND FRAUD CONTROL SYSTEMS”
PEDRO MIGUEL CARDOSO MIRANDA, Departamento de Engenharia Informática
Abstract—In this article we start by describing the issue of fraud in the telecommunications area, explaining its main characteristics and problems. To better contextualize the problem, we describe the main known fraud schemes and forms, with some notes on their specific patterns and concrete examples, followed by a comparative summary table. Next, we explain the generic architecture of a fraud control system, with its main modules, information sources and analysis periodicity, culminating in a comparative summary of the main commercial systems currently available on the market. Finally, we analyze and discuss the state of the art in the area, put forward a set of opinions and criticisms about what exists and has already been done, and discuss the reasonableness of distinguishing normal, albeit atypical, situations for the application of intelligent and automatic analysis and detection methods.
Date: 28-Jul-2006    Time: 15:00:00    Location: TAGUSPARK – PISO 0, Anfiteatro A3

“SITUATION APPRAISAL FOR EMOTION GENERATION”
Énio Manuel Dória Pereira, Departamento de Engenharia Informática
Abstract—Affective Computing is a fairly recent research area that studies the influence and use of emotions in computational systems. Among the various topics covered by this discipline is the synthesis of emotions, and it is within this scope that the present work falls. In this work we intend to establish a link between a system that simulates a virtual world populated by agents and another system whose purpose is to generate emotions. The latter is based on the OCC theory of emotion. This link amounts to providing the agents with an appraisal function which, for each situation experienced by the agents, evaluates it and potentially assigns values to the so-called appraisal variables defined in the OCC theory, with the aim of triggering emotions.
Date: 28-Jul-2006    Time: 10:00:00    Location: ANFITEATRO PA-3 (PISO – 1 DO PAV. DE MATEMÁTICA) DO IST

“TCP – BEHAVIORAL ANALYSIS IN THE CONTEXT OF QUALITY OF SERVICE”
SÉRGIO MIGUEL PEDRO BERNARDINO, Departamento de Engenharia Informática
Abstract—With the evolution of computer networks, new challenges arise every day. With the appearance of ever more network applications that are increasingly sensitive to the type of traffic they require in order to work correctly, the field of quality of service becomes more and more central and important. In parallel, one must consider not only the emergence of new technologies but also the evolution of existing ones. The Transmission Control Protocol, a fundamental element of the network protocol stack and the protocol of choice for the reliable transport of data, has to coexist and adapt with all these changes, while maintaining its good operating characteristics. In this work we focus on TCP from a quality-of-service perspective. Our goal is to critically analyze the limitations and the potential of the protocol in a network environment constrained by quality-of-service requirements.
Date: 27-Jul-2006    Time: 15:00:00    Location: 336

“NETWORK AND APPLICATION LAYER MOBILITY: A COMPARATIVE STUDY”
Jorge Valadas, Departamento de Engenharia Informática
Abstract—Applications have become interwoven with our everyday lives; now more than ever, we feel the need to be able to access them wherever we are. To allow users such behaviour, mobility must be supported. This article analyses two possible levels at which mobility can be supported.
We first explain the reasons which lead to the need for mobility at each of these levels, and follow with an analysis of several of the mobility architectures available, at both the network and the application layer. A qualitative comparison of the solutions is then made, and conclusions are drawn regarding the situations in which each should be used.
Date: 25-Jul-2006    Time: 16:00:00    Location: INESC – 9º PISO, Auditório Alfa (Sala 918)

“POLICY MODELS FOR NETWORK AND SERVICES MANAGEMENT”
Laércio Junior, Departamento de Engenharia Informática
Abstract—This paper studies the evolution of policy-based management and examines some of the proposed architectures for the management of networks and services, evaluating their advantages and disadvantages. A comparison is made among three representative models on general architectural attributes, policy-related attributes, and general evaluation metrics for scalability, reliability and performance. This analysis leads to the observation that there are important shortcomings in all the models, among them the limited scope of the policies used. This is a major issue for the provision of end-to-end quality of service (QoS) in heterogeneous, multi-provider networks. Using the information collected, and considering the needs of modern Internet applications, a model for a policy-based management architecture is derived. It is scalable, technology-independent and capable of providing end-to-end QoS.
Date: 25-Jul-2006    Time: 15:00:00    Location: INESC – 9º PISO, Auditório Alfa (Sala 918)

“FUNCTION MODELING IN ORGANIZATIONAL ENGINEERING”
David Sardinha Andrade de Aveiro, Departamento de Engenharia Informática
Abstract—The main purpose of this thesis is to assess the usefulness of modeling organizational functions in the context of organizational engineering.
To achieve this, we present an extensive analysis of current insights found in the literature of diverse fields of knowledge, such as management, engineering, biology and philosophy, to bridge the gap between the several different perspectives and to clarify what organizational functions in fact are and what should be considered artifacts of the functional dimension of an organization. Based on current findings and on previous work done in the field of organizational engineering, an ontology is proposed for the purpose of modeling the functional concern of an enterprise architecture in a coherent manner. Namely, in organizations, representing a function means specifying, for a certain process X, its interdependencies with the other parts of the organization that contribute to its self-maintenance, namely: (1) a norm (goal value) for a certain state variable of the process; (2) which other process (or processes) depend on this norm in order to remain functional; (3) the set of business rules - embedded in the process itself, or in other process(es) - that act as resilience mechanisms against expected exceptions and try to re-establish the norm of the process's functioning; (4) the set of specialized and accumulated knowledge related to process X's domain, used for the treatment of unexpected exceptions in a special dynamics called microgenesis. We further propose an extension to an existing modeling framework which argues that the multidimensional aspects of the enterprise should be organized into five architectural components: the Organization, Business, Information, Application and Technological architectures. The extension we propose is an additional architectural view: the Function Architecture. This architecture allows the modeling of organizational functions while separating their inherent concerns of operation, monitoring, resilience and microgenesis, and maintaining coherence in their components and interconnections.
Several benefits seem to arise from this proposal, such as: simplification of organizational models thanks to the separation of concerns; increased traceability between fundamental entities of organizations and reuse of model elements; detection of vital processes and of gaps in the organization's self-maintenance mechanisms; among others. The usefulness of these possible benefits will be assessed in the final stage of this thesis, which aims at a practical experiment on modeling functions in real organizations with the proposed architecture, in the context of at least two case studies already planned for execution. Keywords: Function Modeling, Organizational Engineering, Organizational Modeling, Organizational Function
Date: 24-Jul-2006    Time: 10:00:00    Location: INESC – Auditório Alfa, 9º piso

Ambrósio as an assistant for cooking tasks
Filipe Miguel Fonseca Martins, Pedro Jorge Fino da Silva Arez, INESC-ID
Abstract—Ambrósio as an assistant for the execution of cooking tasks.
Date: 21-Jul-2006    Time: 15:30:00    Location: 336

ALGORITHMS FOR LINEAR PSEUDO-BOOLEAN OPTIMIZATION
Vasco Manquinho, Departamento de Engenharia Informática
Abstract—New algorithms for Pseudo-Boolean Optimization have been motivated by the recent advances in Propositional Satisfiability (SAT) algorithms. The new techniques developed in SAT algorithms are powerful mechanisms for manipulating problem constraints, but they are not effective in dealing with information from the cost function in Pseudo-Boolean Optimization problem instances. In this dissertation we propose a new algorithmic framework for solving the Pseudo-Boolean Optimization problem. We start by introducing a new algorithm that integrates SAT-based techniques such as non-chronological backtracking, Boolean constraint propagation and constraint learning with classical branch-and-bound techniques, namely the use of lower bound estimation procedures on the value of the cost function.
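To make the problem concrete: a linear pseudo-Boolean optimization instance minimizes a linear cost over 0/1 variables subject to linear constraints. The following brute-force reference solver in Python (with an invented toy instance) shows only the problem's shape; it is not the dissertation's SAT-based algorithm, which prunes this search with constraint learning and cost lower bounds:

```python
from itertools import product

def solve_pbo(n_vars, cost, constraints):
    """Exhaustively minimize sum(cost[i]*x[i]) over x in {0,1}^n_vars,
    subject to each (coeffs, bound): sum(coeffs[i]*x[i]) >= bound."""
    best, best_x = None, None
    for x in product((0, 1), repeat=n_vars):
        feasible = all(sum(c * xi for c, xi in zip(coeffs, x)) >= bound
                       for coeffs, bound in constraints)
        if feasible:
            value = sum(c * xi for c, xi in zip(cost, x))
            if best is None or value < best:
                best, best_x = value, x
    return best, best_x

# minimize 2*x0 + 3*x1 + x2  subject to  x0 + x1 >= 1  and  x1 + x2 >= 1
best, assignment = solve_pbo(3, [2, 3, 1], [([1, 1, 0], 1), ([0, 1, 1], 1)])
# best == 3, attained by assignment (0, 1, 0): setting x1 alone satisfies both constraints
```

Real solvers avoid this exponential enumeration; the point of the techniques in the abstract is to prune assignments whose cost lower bound already exceeds the best solution found.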
Moreover, we provide conditions for using several lower bound estimation procedures together with SAT-based techniques, and introduce the notion of bound conflict learning. Finally, we also propose the use of cutting-plane techniques, commonly used in Integer Linear Programming, within a SAT-based framework. Experimental results show that our algorithm is more effective in solving several Pseudo-Boolean Optimization problem instances and provides a significant contribution to this area.
Date: 20-Jul-2006    Time: 14:00:00    Location: ANFITEATRO DO COMPLEXO I DO IST

Representing Policies for Quantified Boolean Formulas
Daniel Le Berre, CRIL - Faculté Jean Perrin, Université d'Artois
Abstract—The practical use of Quantified Boolean Formulas (QBFs) often calls for more than solving the QBF validity problem. For this reason, we investigate the corresponding function problems, whose expected outputs are policies. QBFs which do not evaluate to true have no solution policy, but can nevertheless be of interest; to handle them, we introduce a notion of partial policy. We focus on the representation of policies, considering QBFs of the form ∀X ∃Y φ. Because the explicit representation of policies for such QBFs can be of exponential size, descriptions as compact as possible must be sought. To address this issue, two approaches based on the decomposition and the compilation of φ are presented.
Date: 20-Jul-2006    Time: 10:00:00    Location: 336

“EVALUATION OF MDE TOOLS FROM A METAMODELING PERSPECTIVE”
João Saraiva, Departamento de Engenharia Informática
Abstract—Ever since the introduction of computers into society, researchers have constantly been trying to raise the abstraction level at which we write software programs; the first computer programming languages, structured languages and object-oriented languages are examples of this.
We are currently adopting a new abstraction level based on models instead of source code: Model-Driven Engineering (MDE). This new abstraction level is the driving force for some recent modeling approaches, such as OMG's Unified Modeling Language (UML) or Domain-Specific Modeling (DSM). But MDE and all its approaches are founded on metamodeling, the definition of a language representing a problem-domain and then the usage of that language to create models. A key factor for the success of an approach is appropriate tool support; this has been the case with UML and DSM. However, it was only recently that tool creators started considering metamodeling as a first citizen issue in their list of greatest concerns and priorities. In this paper, we evaluate a set of MDE tools from the perspective of the metamodeling activity. This evaluation is focused on both architectural and practical aspects of modeling and how the metamodeling activity is supported. Then, using the results of this evaluation, we discuss the current status of MDE tools and the direction that tool creators seem to be taking. Date: 18-Jul-2006    Time: 15:00:00    Location: INESC – 9º Piso Auditório Alfa (sala 918) “MÉTODOS DE BICLUSTERING PARA A IDENTIFICAÇÃO DE MECANISMOS DE REGULAÇÃO GENÉTICA” Ana Ramalho Departamento de Engenharia Informática Abstract—A quantidade de informação disponível sobre sistemas biológicos aumentou intensamente nos últimos anos, tornando o mecanismo de regulação da expressão dos genes o problema chave desta era. A técnica de biclustering, associada a dados de expressão de genes, tem sido muito utilizada para uma melhor compreensão deste mecanismo. Com esta técnica pretende-se identificar processos biológicos nos quais estão envolvidos os genes dos biclusters e, tentar decifrar redes de regulação pela análise do conjunto dos biclusters obtidos. 
However, this problem remains open, since this methodology has not yet provided a complete and correct understanding of the regulation mechanism. This work describes a new methodology that uses transcriptional regulation data to identify groups of genes regulated by a common set of transcription factors. To this end, an algorithm was developed that identifies constant biclusters in a binary regulation matrix. The methodology was applied to regulation data from the organism Saccharomyces cerevisiae. The results obtained for documented regulations generally revealed groups with important biological meaning, since genes belonging to the same biological processes were grouped together. The results obtained for potential regulations are more complex to analyse, but they open the door to identifying gene functions and to understanding the biological processes in which these genes are involved. Date: 18-Jul-2006    Time: 14:00:00    Location: ANFITEATRO PA-3 (PISO – 1 DO PAV. DE MATEMÁTICA) DO IST Recent work by Jean-Luc Rouas Jean-Luc Rouas INRETS Abstract—Jean-Luc Rouas will talk about two recent research trends: - Detection of audio events for surveillance in public transportation - Identification of dialects using prosodic cues Date: 11-Jul-2006    Time: 15:30:00    Location: 336 DOTTED SUFFIX TREES: A STRUCTURE FOR APPROXIMATE TEXT INDEXING Luis Coelho Departamento de Engenharia Informática Abstract—The problem we address is text indexing for approximate matching. We consider that we are given a text T which undergoes some preprocessing to generate an index. We can later query this index to identify the places where a string occurs up to a certain number of allowed errors k, where by error we mean the insertion, deletion or substitution of one character (edit or Levenshtein distance).
We present a structure for indexing which occupies space O(n log^k n) in the average case, independent of alphabet size, n being the text size. This structure can be used to report the existence of a match with k errors in O(3^k m^{k+1}) time and to report the occurrences in O(3^k m^{k+1} + ed) time, where m is the length of the pattern and ed is the number of matching edit scripts. These bounds are independent of alphabet size. The construction of the structure takes time O(k N |S|), where N is the number of nodes in the index and |S| is the alphabet size. Date: 06-Jul-2006    Time: 16:30:00    Location: 336 “PLANEAMENTO DE RECURSOS MÓVEIS POR MELHORAMENTO ITERATIVO” Fausto Jorge Morgado Pereira de Almeida Departamento de Engenharia Informática Abstract—I propose an Artificial Intelligence iterative-improvement approach to crew duty planning, while adopting ideas from Operations Research. In this approach, plans are improved according to well-defined improvement objectives, in an abstract space of meta-operators. Unlike conventional operators or macro-operators, meta-operators allow large jumps in the state space and avoid getting stuck in local minima. Their use is an innovation in the iterative-improvement approach, advantageously replacing the other kinds of operators. Each meta-operator solves a sub-problem smaller than the original problem, using a suitable constructive solver. To define a sub-problem, a set of duties to be improved is selected according to a given objective, and their activities are used. The solver must find a different way of combining these activities into new duties, one closer to the improvement objectives of the global search. With this method, following a white-box approach, plans can be repaired or optimised effectively.
The resulting system, SMI, was tested on several problems provided by a European railway company, and the results were compared with those obtained by the company's planners and by a state-of-the-art industrial system. Date: 05-Jul-2006    Time: 14:00:00    Location: ANFITEATRO DO COMPLEXO I DO IST The Cost of Search? Toby Walsh National ICT Australia and University of New South Wales Abstract—Whilst waiting for a search procedure like a TSP or SAT solver to finish, you might ask yourself a number of questions. Is the search procedure just about to come back with an answer, or has it taken a wrong turn? Should I go for coffee and expect to find the answer on my return? Is it worth leaving this to run overnight, or should I just quit as this search is unlikely ever to finish? To help answer such questions, we propose some new online methods for estimating the size of a backtracking search tree. Date: 29-Jun-2006    Time: 10:30:00    Location: 336 Leak Resistant Architecture: Statements and Perspectives Daniel Mesquita Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (Jean-Claude Bajard) Abstract—Hardware implementations of cryptographic algorithms may leak information such as computing time, electromagnetic emissions and power consumption. Based on this information, several kinds of attacks can be performed to recover cryptographic keys. This presentation shows two approaches to thwart some Side Channel Attacks (SCA). The first one is an analog hardware countermeasure that counteracts SCA and does not require any modification of the cryptographic algorithm, the messages or the keys. The second method combines reconfigurable techniques with the recently proposed Leak Resistant Arithmetic (LRA) to thwart SCA based on power analysis.
The main aim of this approach is to perform modular multiplication and exponentiation, the most significant cryptographic operations, while randomly changing the intermediate results of the computation. In this way, SCA based on power analysis is no longer effective. This approach resulted in a Leak Resistant Reconfigurable Architecture (LR²A). Both methods were simulated and synthesized for 0.18 µm CMOS technology. A short version of the LR²A was prototyped in an FPGA, and an SCA attack was performed to show the efficiency of the new architecture. Date: 23-Jun-2006    Time: 14:00:00    Location: 336 Parsing Conversational Speech Mari Ostendorf University of Washington Abstract—With recent advances in automatic speech recognition (ASR), there are increasing opportunities for natural language processing of speech, including applications such as speech understanding, summarization and translation. Parsing can play an important role here, but much of current parsing technology has been developed on written text. Spontaneous speech differs substantially from written text, posing challenges for parsing that include the absence of punctuation and the presence of disfluencies and ASR errors. Prosodic cues can help fill in this gap, and there is a long history of linguistic research indicating that prosodic cues in speech can provide disambiguating context beyond that available from punctuation. However, leveraging prosodic cues can be challenging, because of the many roles prosody serves in speech communication. This talk looks at means of leveraging prosody combined with lexical cues and ASR uncertainty models to improve parsing (and recognition) of spontaneous speech. The talk will begin with an overview of studies of prosody and syntax, both perceptual and computational.
The focus of the talk will be on our work with a state-of-the-art statistical parser, discussing the issues of sentence segmentation, disfluencies, sub-sentence prosodic constituents, and ASR uncertainty. In addition, we show how these issues impact the use of parsing language models in ASR. We conclude by highlighting challenges in speech processing that impact parsing, including tighter integration of ASR and parsing, as well as portability to new domains. Date: 21-Jun-2006    Time: 11:00:00    Location: IST, Torre Norte, Anfiteatro Ea3 Speaker Characterization with MLSFs Hugo Cordeiro Instituto Superior de Engenharia de Lisboa (ISEL) Abstract—The work described in this paper concerns the analysis of an alternative feature for speaker characterization, in the context of speaker recognition: Line Spectrum Frequencies (LSF), but derived from mel-filter bank energies. This new feature, which we call mel-LSFs (MLSFs), shows performance similar to that of MFCCs, one of the most common features in speaker recognition, for male speakers, but for female speakers MLSFs perform better than MFCCs. When combined with mel-LSF deltas, the MLSFs feature outperforms MFCCs for both male and female speakers, even with temporal deltas, ΔMFCCs, included. Performance is measured in the context of speaker verification, using EER and minimum HTER. Detection error tradeoff (DET) curves are also presented, as well as HTER curves. The main objective of this study is to compare the performance of different features within a common framework, for which a standard support vector machine recogniser was developed. Tests are based on the cellular component of the “2002 NIST Speaker Recognition Evaluation” corpus.
Date: 16-Jun-2006    Time: 15:30:00    Location: 336 “Qualidade de Serviço em Web-Services” Ricardo Manuel Ferreira Seabra Gomes Departamento de Engenharia Informática Abstract—Web Services emerged as a new attempt to make the various services available in a distributed architecture interoperable, using technologies that had previously been standardised for communication among the various participating elements. Although it is a new technology, several systems on the market already use Web Services. As these systems enter production, the need arises to introduce parameters to gauge the quality of the services provided. However, the specification, as developed, contains no way to define quality-of-service parameters in the description of the services provided. This has led several vendors, such as IBM, HP and Microsoft, to put forward proposals with that very goal. This work surveys the state of the art of current proposals for Web Services with quality-of-service support, both at the service-description level and at the architecture level. It also gives a perspective on how end-to-end quality of service can be guaranteed, by mapping application-level quality-of-service support onto that offered by the network. Date: 07-Jun-2006    Time: 09:00:00    Location: INESC – Auditório Alfa (sala 918) 9º piso Design for Testability and 0 PPM Strategies: Industrial Experience Anton Chichkov AMIS Abstract—Cost-effective test of semiconductor products in DSM (Deep Sub-Micron) technologies is a challenging problem. High-quality test, leading to extremely low defect levels, or escape rates (in the order of a few ppm (parts per million)), requires a unified approach of intelligent management of different test strategies.
Digital test is difficult; analog and mixed-signal design and test are even more demanding. The author works in a worldwide leading company and will address, from an industrial point of view, the following topics: - Need for test - Cost of test and DFT (Design for Testability) - Link between yield, coverage and PPMs - Challenges for 0 PPM strategies - Some real cases of RMA - Some possible directions to go with DFT - Addressing bridging faults - Addressing open faults - Analogue BIST - Overview of the state of the art in the industry for test development and coverage Date: 01-Jun-2006    Time: 11:00:00    Location: IST, Torre Norte, Anfiteatro EA4 An Introduction to Grid Computing Tiago Manuel da Cruz Luís, João Rui Mariano Leal Inesc-ID Abstract—We present our work in the development and installation of a grid system in the Spoken Language Systems Lab. We will give a brief notion of what grid computing is and describe the platforms used in our work, namely Condor and Globus. Finally, we present examples of how a grid system can be used and what its benefits are. Date: 26-May-2006    Time: 15:30:00    Location: 336 “TOWARDS PREDICTIVE MODELS FOR E-LEARNING” Maria Alexandra Rentroia Bonito Departamento de Engenharia Informática Abstract—The doctoral work outlined in this document is part of an ongoing research program conducted at Instituto Superior Técnico (IST) by GELO (Group for E-Learning in Organizations). Its main objective is to analyse the relationship between motivation-to-learn, usability of learning systems, and learning outcomes by using an integrated usability evaluation framework. The main research question is: Do e-learning programs, designed taking into account usability and motivation-to-e-learn aspects, positively enhance learning outcomes? If so, what design variables are more relevant to motivating which groups to engage in e-learning? We propose a conceptual framework and its embodiment in an e-learning system.
To assess its effectiveness, we developed a usability evaluation method to drive empirical tests. This evaluation method combines quantitative and qualitative measures and fosters design-oriented user feedback along the learning process. The characteristics of the proposed holistic evaluation method address some identified weaknesses in usability evaluation studies, such as the lack of: (a) an integrated conceptual approach to evaluate e-learning systems’ usability in the context of use, and (b) a design tool to bridge the designer-user communication gap in a cost-effective manner. Preliminary results, garnered over the course of the last two years with a small subject population, suggested that the proposed evaluation method allows a structured and design-oriented assessment of the usability of e-learning systems, taking into account learners’ motivation to e-learn. The proposed research work will test and validate these results by using a larger population, to yield a workable approach to predicting e-learning outcomes. The main expected contribution of this research work is an empirically tested usability evaluation method that allows development teams to anticipate the impact of learners’ motivation and of the usability of e-learning systems on outcomes. This will support development teams’ decision-making when designing e-learning experiences, focusing their effort on the items online learners value in specific learning situations. Date: 25-May-2006    Time: 14:30:00    Location: Sala de Reuniões do DEI CABL: Conteúdos Audiovisuais para Banda Larga Inês Oliveira Universidade Lusófona de Humanidades e Tecnologias Abstract—Most multimedia applications require access to the content itself; consider, for example, interactive TV, personalised news, or video on demand.
Given this, it is essential to access audiovisual information in terms of content; otherwise, the sheer amount of video becomes an obstacle to its retrieval. Content-based access to audiovisual material allows audio, video and images to be retrieved automatically, although this is a rather complex challenge. First, because audiovisual information is characterised by its large data volume and its heterogeneity. Second, because content-based representations describe audiovisual information in terms of visual or acoustic properties (colours, textures, motion, frequency, etc.) rather than semantic properties. Automatic or semi-automatic production of summaries of multimedia information has thus become one of the approaches adopted to address the problem of the large amount of data to be retrieved. The CABL platform, in particular, has as its main objective the creation of a service, and associated applications, for delivering audiovisual content in Portuguese over broadband. This project includes a management application that, once completed, will allow the semi-automatic generation of audiovisual summaries based on semantic categories and user profiles. Date: 19-May-2006    Time: 15:30:00    Location: 336 Finding good unsatisfiable sub-clause-sets Oliver Kullmann University of Wales Swansea Abstract—I want to present some joint work with Joao Marques-Silva and Ines Lynce on the problem of finding "good" unsatisfiable sub-clause-sets of a given unsatisfiable clause-set. This problem has applications in consistency checking of order specifications as well as in model checking. Our approach is based on a fine-grained analysis of the clause-set, exploiting proof-theoretical and semantical properties. Especially the analysis of minimally unsatisfiable sub-clause-sets and generalisations to "lean" clause-sets (i.e., "autarky-free" clause-sets) will play a role here.
Date: 16-May-2006    Time: 14:00:00    Location: 217 “MODELAÇÃO DE CONTEXTO EM ENGENHARIA ORGANIZACIONAL” Ana Rita Silva Marques Amado Fernandes Departamento de Engenharia Informática Abstract—Despite the many initiatives and information systems created to support business processes, people still spend much of their working time selecting and obtaining the information needed to carry out their activities within the organisation. Regarding collaborative work, we also find numerous systems that provide only partial support to business actors, since they focus on a particular objective and do not take into account individuals' multi-tasking abilities. From an operational point of view, to properly support intellectual tasks in the organisation, the required information must be provided proactively and in a timely manner, according to human information-processing patterns. Incorporating these patterns into technologies and tools requires a different perspective on the organisation and new organisational concepts. The actors involved in business processes, especially human actors, are complex entities capable of exhibiting multiple behaviours depending on the task and on the role played in its execution. This research develops the concept of "Interaction Context", presenting it as the key element for modelling actors and the interactions that occur during the execution of the tasks that make up business processes. To this end, a method suited to the qualitative nature of the information was defined and applied, illustrating these concepts with a practical case carried out in a real organisational environment. Keywords: Organisational Modelling; Business Process and Business Actor Modelling; Role-Based Modelling; Action Context and Interaction Context; Collaborative Work; Speech Act Theory.
Date: 05-May-2006    Time: 15:00:00    Location: ANFITEATRO PA-3 DO EDÍFICIO DE PÓS-GRADUAÇÃO DO IST Predicting transient error rates due to radiation for processor-based digital architectures Dr. Raoul Velazco Lab, Institut National Polytechnique de Grenoble Abstract—Microelectronic circuits operating in radiation environments can be affected by the so-called Single Event Upset (SEU) phenomenon. SEUs, also referred to as "upsets", "soft errors" or "bit flips", are mainly responsible for transient (non-destructive) changes in the information stored in memory cells within integrated circuits. The cause of SEUs is the creation of a spurious current pulse in sensitive areas of the circuit. This current pulse appears as the consequence of the ionization produced by the interaction of energetic particles with the silicon substrate. For the last 20 years, SEUs have been a major concern for space applications due to the presence of charged particles (heavy ions, protons) in the space environment. The constant improvements accomplished by microelectronics manufacturing technology make today's integrated circuits operating in the Earth's atmosphere potentially sensitive to SEUs. Indeed, upsets observed in aircraft equipment, and even in systems operating at ground level, have been explained by the interaction of neutrons present in the atmosphere. Notice that in this case the incident particle has no charge; the ionization is provoked by the daughter particles resulting from the interaction between the neutron and atoms present in the silicon substrate. Perturbations provoked by Single Event Upsets (SEUs) increase with the reduction of transistor feature sizes. This talk will present a strategy for estimating SEU error rates based on limited radiation ground testing (performed in particle accelerators) and fault injection results. A flexible and versatile test platform, well suited to implementing such a strategy, will be described.
Experimental results obtained for different processors will illustrate the accuracy of the error-rate predictions produced by the proposed strategy. Date: 05-May-2006    Time: 11:00:00    Location: 336 Adaptive Main Memory Compression Thomas Gross ETH Abstract—Title: Adaptive Main Memory Compression

Irina Chihaia Tuduce and Thomas Gross
Departement Informatik
ETH Zurich

Applications that use large data sets frequently exhibit poor performance because the size of their working set exceeds the available physical memory. As a result, these applications suffer from excess page faults and ultimately exhibit thrashing behavior. For some applications, compression offers a way to reduce the number of page faults that must be serviced from the disk. We describe here a system that can be implemented with a small number of kernel changes.

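The page-fault saving described above hinges on how well the working set compresses. A minimal sketch (not part of the system described in the talk; the page contents and the use of zlib here are purely illustrative assumptions) shows how the compression ratio differs between regular and random-looking page-sized buffers:

```python
import hashlib
import zlib

PAGE_SIZE = 4096  # a typical page size; an assumption, not from the talk


def compress_page(page: bytes) -> bytes:
    """Compress one page, as the system would on eviction; level 1 favours
    speed over ratio, since fault latency matters more than space."""
    return zlib.compress(page, 1)


# A highly regular page vs. a random-looking (effectively incompressible) one.
text_page = (b"the quick brown fox " * 205)[:PAGE_SIZE]
random_page = b"".join(
    hashlib.sha256(i.to_bytes(4, "little")).digest()
    for i in range(PAGE_SIZE // 32)
)

for name, page in [("text", text_page), ("random-ish", random_page)]:
    blob = compress_page(page)
    # Pages that compress well make a compressed area worthwhile;
    # pages near ratio 1 are better written straight to disk.
    print(f"{name}: {len(page)} -> {len(blob)} bytes "
          f"(ratio {len(page) / len(blob):.1f}x)")
```

The paper's reported speedups (a factor of 1.3 to 55) reflect exactly this spread: the benefit grows with the application's compression ratio.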
The key idea to exploit the benefits of memory compression is to adapt the allocation of real (physical) memory between uncompressed and compressed pages without user involvement. The system manages its resources dynamically on the basis of the varying demands of each application and also on the situational requirements that are data dependent. The technique used to localize page fragments in the compressed area allows the system to reclaim or add space easily if it is advisable to shrink or grow the size of the compressed area.

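The grow/shrink policy just described can be modelled in user space. The toy class below is an illustrative guess at the mechanism, not the actual kernel implementation: class and method names, the FIFO reclaim order, and the dictionary "disk" are all assumptions for the sketch.

```python
import zlib


class CompressedArea:
    """Toy user-space model of an adaptive compressed memory area:
    evicted pages are compressed into a bounded pool, and the pool
    spills its oldest pages to 'disk' when it shrinks or overflows."""

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes          # current size of the compressed area
        self.used = 0
        self.pages: dict[int, bytes] = {}   # page number -> compressed blob (FIFO)
        self.disk: dict[int, bytes] = {}    # stand-in for the swap device

    def store(self, pageno: int, data: bytes) -> None:
        """Called when a page is evicted from uncompressed memory."""
        blob = zlib.compress(data, 1)
        self.pages[pageno] = blob
        self.used += len(blob)
        self._shrink_to_budget()

    def load(self, pageno: int) -> bytes:
        """Page-fault path: the compressed area first, then 'disk'."""
        if pageno in self.pages:            # cheap fault: just decompress
            return zlib.decompress(self.pages[pageno])
        return zlib.decompress(self.disk[pageno])   # expensive fault

    def resize(self, new_budget: int) -> None:
        """Grow or shrink the area as the workload's demands change."""
        self.budget = new_budget
        self._shrink_to_budget()

    def _shrink_to_budget(self) -> None:
        # Reclaim space by pushing the oldest compressed pages to disk.
        while self.used > self.budget and self.pages:
            pageno, blob = next(iter(self.pages.items()))
            del self.pages[pageno]
            self.used -= len(blob)
            self.disk[pageno] = blob
```

Shrinking the budget simply migrates compressed pages onward to disk, which is why, in the real system, reclaiming or adding space in the compressed area can be done easily and incrementally.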
The design is implemented in Linux, runs on both 32-bit and 64-bit architectures, and has been demonstrated to work in practice under complex workload conditions and memory pressure. The benefits from our approach depend on the relationship between the size of the compressed area, the application's compression ratio, and the access pattern of the application. For a range of benchmarks and applications, the system shows an increase in performance by a factor of 1.3 to 55.

Short CV
Thomas R. Gross is a Professor of Computer Science at ETH Zurich, Switzerland. He is the head of the Computer Systems Institute; from 1999 to 2004 he was the deputy director of the NCCR on "Mobile Information and Communication Systems", a research center funded by the Swiss National Science Foundation. He is also an Adjunct Professor in the School of Computer Science at Carnegie Mellon University.

Thomas Gross joined CMU in 1984 after receiving a Ph.D. in Electrical Engineering from Stanford University. In 2000, he became a Full Professor at ETH Zurich. He is interested in tools, techniques, and abstractions for software construction and has worked on many aspects of the design and implementation of programs. To add some realism to his research, he has focussed on compilers for uni-processors and parallel systems and has contributed to many areas of compilation (code generation, optimization, debugging, partitioning of computations, data parallelism and task parallelism). Compilers are also interesting systems that illustrate the use of many concepts to structure programs (frameworks, patterns, components). Compilers require a good cost-model of the target environment (e.g., to make space-time tradeoffs) but recent systems have become so complex that simple models no longer suffice. In his current research, Thomas Gross and his colleagues investigate network- and system-aware programs -- i.e. programs that can adjust their resource demands in response to resource availability.

In addition to working on compilers, Thomas Gross has been involved in several projects that straddle the boundary between applications and compilers. And since many programs are eventually executed on real computers, he has also participated in the past in the development of several machines: the Stanford MIPS processor, the Warp systolic array, and the iWarp parallel systems. His current work in computer systems concentrates on networks. Date: 03-May-2006    Time: 10:00:00    Location: Sala 905 (Sala Omega do POSI) Modelos simples com tempo discreto de circuitos de regulação genética Ricardo Coutinho Instituto Superior Técnico Abstract—We describe the modelling of genetic regulatory networks by means of piecewise-affine discrete-time dynamical systems. We present the re