Security Seminar at LORIA

An iCal file is available for your digital calendars.
Talks 2019 – 2020
Thursday May 28 2020
Ralf Küsters (Universität Stuttgart)
Cancelled due to Covid-19
A008, 13:30
Cancelled due to Covid-19.
Thursday March 12 2020
Emmanuel Thomé (Inria Nancy)
Large-scale computational records for public-key cryptography.
A008, 13:30
Joint work with Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic,
Nadia Heninger, Paul Zimmermann.
In December 2019 and February 2020, we completed several record computations related to hard problems in public-key cryptography: we factored a 240-digit RSA modulus, we computed discrete logarithms modulo a 240-digit prime, and, while we were at it, we also factored a 250-digit RSA modulus. Such records provide very important data points for assessing the computational hardness of the integer factorization (IF) and finite field discrete logarithm (DL) problems. These problems underpin the largest part of the public-key cryptography currently in use. The previous records date back to 2016 (for DL) and 2009 (for IF).
This talk reports on how these computations were performed, and on how we chose parameters for the Number Field Sieve algorithm in order to minimize the running time. We also give some data on the scalability of our approach, and on how it required harnessing formidable computing power, with thousands of CPU cores from several facilities working in parallel over several months. We also show that our work goes well beyond improving on previous records by using more hardware: even with identical hardware, our larger computation would have been faster than the smaller, previous record (for DL).
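As a toy illustration of the two underlying problems (not of the Number Field Sieve, which is a far more sophisticated, subexponential algorithm), the sketch below factors a tiny modulus by textbook trial division and solves a tiny discrete logarithm by baby-step giant-step; both approaches become hopeless long before 240-digit inputs, which is why record computations rely on NFS.

```python
from math import isqrt

def trial_factor(n):
    """Smallest prime factor of n by trial division (exponential in digit count)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def bsgs_dlog(g, h, p):
    """Find x with g^x = h (mod p) by baby-step giant-step, O(sqrt(p)) time/memory."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]           # x = i*m + j
        gamma = gamma * giant % p
    return None

print(trial_factor(101 * 103))   # factoring a toy "RSA modulus" 10403
print(bsgs_dlog(5, 17, 23))      # discrete log of 17 base 5 modulo 23
```

The moduli and group here are of course illustrative; the record computations involved months of sieving on thousands of cores.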
Thursday November 28 2019
Ralf Sasse (ETH Zurich)
Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures (joint work with Dennis Jackson, Cas Cremers, Katriel Cohn-Gordon)
A008, 13:30
The standard definition of security for digital signatures, existential unforgeability, does not ensure certain properties that protocol designers might expect. For example, in many modern signature schemes, one signature may verify against multiple distinct public keys. It is left to protocol designers to ensure that the absence of these properties does not lead to attacks.
Modern automated protocol analysis tools are able to provably exclude large classes of attacks on complex real-world protocols such as TLS 1.3 and 5G. However, their abstraction of signatures (implicitly) assumes much more than existential unforgeability, thereby missing several classes of practical attacks.
We give a hierarchy of new formal models for signature schemes that captures these subtleties, and thereby allows us to analyse (often unexpected) behaviours of real-world protocols that were previously out of reach of symbolic analysis. We implement our models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies. In the process, we find new attacks on DRKey and SOAP's WS-Security, both of which were previously proven secure in traditional symbolic models.
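The property "one signature may verify against multiple distinct public keys" can be made concrete with a deliberately insecure toy scheme (my own illustration, not any real scheme from the talk): verification just reduces a hash modulo the public key, so an adversary can craft a second key that accepts an honest participant's signature.

```python
import hashlib

def toy_verify(pk: int, msg: bytes, sig: int) -> bool:
    # Toy rule: a signature is "valid" iff it equals H(msg) mod pk.
    digest = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return digest % pk == sig

msg = b"transfer 100 euros"
digest = int.from_bytes(hashlib.sha256(msg).digest(), "big")

pk1 = 2**16 + 1        # an honest participant's "public key"
sig = digest % pk1     # the signature they publish

# Attacker's second, distinct key accepting the very same (msg, sig):
# since digest = 1 * (digest - sig) + sig and sig < digest - sig,
# reducing digest modulo (digest - sig) also yields sig.
pk2 = digest - sig
print(toy_verify(pk1, msg, sig), toy_verify(pk2, msg, sig), pk1 != pk2)
```

Each toy verification succeeds under both keys even though the scheme is existentially unforgeable against no one; real schemes with analogous (much subtler) key-substitution behaviours are exactly what the talk's formal models capture.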
Thursday December 5 2019
Emmanuelle Anceaume (IRISA Rennes)
Abstractions for permissionless distributed systems
A008, 13:30
Permissionless distributed systems are distributed systems in which (i) the number of participants carrying out the protocol is not known beforehand, and is not even known during the course of the execution, (ii) the right to contribute (e.g., by proposing a new value of interest for the other participants) and to participate (e.g., by reading newly decided values) is not controlled by a (trustworthy) third authority, and (iii) participants communicate over a weakly connected communication topology (such as a peer-to-peer network). The permissionless blockchain technology complies exactly with these features. In this presentation I will cover some of the most important abstractions that support the construction of such distributed systems.
Thursday February 6 2020
Adi Shamir (Weizmann Institute of Science, Israel)
The Insecurity of Machine Learning: Problems and Solutions (replay Esorics 2019)
Amphi Gilles Kahn, 13:30
This seminar is a replay of Adi Shamir's keynote talk at Esorics 2019.
We will show the recorded video with his kind permission. Note, however, that the video will NOT be distributed afterwards.
The development of deep neural networks in the last decade has revolutionized machine learning and led to major improvements in the precision with which we can perform many computational tasks. However, the discovery five years ago of adversarial examples, in which tiny changes in the input can fool well-trained neural networks, makes it difficult to trust such results when the input can be manipulated by an adversary.
This problem has many applications and implications in object recognition, autonomous driving, cyber security, etc., but it is still far from being understood. In particular, there has been no convincing explanation of why such adversarial examples exist, or of which parameters determine the number of input coordinates one has to change in order to mislead the network.
In this talk I will describe a simple mathematical framework which enables us to think about this problem from a fresh perspective, turning the existence of adversarial examples in deep neural networks from a baffling phenomenon into an unavoidable consequence of the geometry of R^n under the Hamming distance, which can be quantitatively analyzed.
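A sketch in the spirit of that geometric viewpoint (the exact framework is the speaker's; this linear toy is my own illustration): for a linear classifier on {0,1}^n, flipping only the few coordinates with the largest influence on the score already changes the decision, i.e. an adversarial example exists at tiny Hamming distance from the input.

```python
import random

random.seed(0)
n = 1000
w = [random.gauss(0, 1) for _ in range(n)]     # classifier weights
x = [random.randint(0, 1) for _ in range(n)]   # an input point in {0,1}^n

score = sum(wi * xi for wi, xi in zip(w, x))
sign0 = score > 0                              # original decision

# Flipping bit i changes the score by +w[i] (0 -> 1) or -w[i] (1 -> 0).
delta = [w[i] if x[i] == 0 else -w[i] for i in range(n)]

# Greedily flip the bits that push the score hardest toward the other class.
adv, s, flips = x[:], score, 0
for i in sorted(range(n), key=lambda i: delta[i] if sign0 else -delta[i]):
    if (s > 0) != sign0:                       # decision already flipped
        break
    s += delta[i]
    adv[i] = 1 - adv[i]
    flips += 1

print(f"decision flipped after {flips} of {n} bit flips")
```

The score of a random point concentrates at distance O(sqrt(n)) from the decision boundary while each influential flip moves the score by O(1), so only O(sqrt(n)) of the n coordinates need to change, a vanishing fraction as n grows.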
© 2013 – 2019 Pierrick Gaudry, Marion Videau and Emmanuel Thomé