THESIS BOARD
2021 Edition
Miriam Boscarato (Scuola di management ed economia - Dipartimento di ESOMAS - UNITO)
"Risk management nello spazio cibernetico e la relativa sicurezza nel settore sanitario: impatto economico-sociale"
(Supervisor: Dott. Emanuele Davide Ruffino).
The thesis I present brings together themes that, at first glance, might seem unrelated, but are in fact strongly interconnected, forming a complex system that influences the economy, society and technological progress. I chose to treat the subject in a broad, cross-cutting way, to highlight how important it is to find a meeting point between technology (in this case, cyber technology), society and the goals of the economic system. Economics is a social science and cannot be separated from the behaviour and choices of consumers: on the one hand, it is essential to understand that we ourselves partly steer the system and enable, or block, the spread of a "risk culture"; on the other, to trust innovations we need some instruments of protection from the technology itself and from the law. The pandemic we have lived through (and are still living through) shows why implementing cybersecurity systems matters: an emergency of that scale has exposed the weaknesses of the system and the strength of cybercrime, which does not stop even in the face of such an imposing situation. Today everything revolves around technological progress and, even in fields that might seem far from that reality, such as healthcare, network security is essential: one wrong bit in the wrong place can be fatal to the outcome of one, or several, episodes of care.
Luca Campa (Università degli Studi di Udine)
"Sicurezza in reti OT: analisi di una situazione industriale"
(Supervisor: Marino Miculan).
The use of computer systems within industrial plants has, on the one hand, made it possible to optimize production and decision-making processes; on the other, it has exposed companies to attacks by malicious actors that could damage property and people. To avoid these risks it is necessary to analyse the vulnerabilities of the systems in use and the possible behaviour of attackers, and to put in place countermeasures that neutralize, or reduce to a minimum, the effects of a potential attack. The convergence of the IT (Information Technology) and OT (Operational Technology) worlds is one of the crucial aspects of the discussion, since it has proven to be the main entry point for cyber attacks in the industrial world. As a case study, the thesis considers the famous attack on the Natanz nuclear facility (Iran), better known by the name of the software built to sabotage the plant: Stuxnet. Analysing the lower levels of a typical industrial network (according to the ANSI/ISA-95 standard), several solutions are proposed; the best of them led to the implementation of a hybrid Intrusion Detection System, with a MAPE-K architecture, dedicated to defending the integrity of information in OT networks. Without loss of generality, the thesis took the Siemens ecosystem as a reference (in particular, S7-1500 PLCs and the Profinet network), among the most widespread in this context. The two components of the IDS monitor different levels of the typical industrial network described above: the host-based IDS checks the integrity of the software executed inside the PLC, while the network-based IDS intercepts, checks and validates the communication between the PLC and the I/O devices, reporting any errors.
A notable side result, not initially planned, was the implementation of a debugging system for the Ladder Diagram language used to program PLCs.
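In a much simplified form, the host-based integrity check could be sketched as a comparison of program-block digests against a known-good baseline (an illustration only: the block names and contents below are invented, and a real S7-1500 monitor would read program blocks through the Siemens interfaces):

```python
import hashlib

def digest(block: bytes) -> str:
    """Return the SHA-256 digest of a program block."""
    return hashlib.sha256(block).hexdigest()

def check_integrity(baseline: dict, current: dict) -> list:
    """Compare current block digests against the trusted baseline.

    Returns the names of blocks that were modified, removed or added."""
    alerts = []
    for name, good in baseline.items():
        if name not in current:
            alerts.append(f"{name}: block removed")
        elif digest(current[name]) != good:
            alerts.append(f"{name}: digest mismatch")
    for name in current:
        if name not in baseline:
            alerts.append(f"{name}: unexpected new block")
    return alerts

# Baseline taken when the PLC program is known to be genuine.
baseline = {"OB1": digest(b"MAIN CYCLE LOGIC"), "FB10": digest(b"VALVE CONTROL")}

# A later snapshot in which FB10 has been tampered with.
snapshot = {"OB1": b"MAIN CYCLE LOGIC", "FB10": b"VALVE CONTROL (sabotaged)"}
print(check_integrity(baseline, snapshot))  # → ['FB10: digest mismatch']
```

The network-based component would apply the same idea to traffic rather than code, validating each PLC/I-O exchange against expected patterns.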
Sara Concas (Dipartimento di Ingegneria Elettrica ed Elettronica, Università degli Studi di Cagliari)
"Deepfake detection using quality measures"
(Supervisor: Gian Luca Marcialis).
The term “DeepFake” usually refers to a video in which the face of a person has been replaced with another one, or its expression has been changed to follow that of a source face, typically using deep learning-based techniques.
There are several positive applications of deepfakes, such as video dubbing of foreign movies, virtually trying on clothes while shopping, reanimation of historical figures for educational purposes.
The malicious applications, however, outnumber the positive ones: the most alarming are the creation of fake news, hoaxes, financial fraud and unethical uses such as transposing celebrity faces into pornographic videos. The name “deepfakes” itself originated from a Reddit user who, in 2017, used deep learning to swap the faces of celebrities into pornographic videos and posted them online. The spread of deepfakes is becoming particularly alarming among children and teenagers, defenceless victims of online harassment, revenge porn and acts of intimidation. Very often the minor is ashamed to reveal the abuse, risking serious psychological damage.
There exist several types of deepfakes: face swap (when the entire face of a person is replaced with that of another person), reenactment (when the expression, mouth, gaze, pose or body of the target identity is driven by that of a source), editing (when one or more attributes of the target person are added, altered, or removed), synthesis (the fake is created using human face and body synthesis techniques).
The quality of deepfakes is increasing exponentially, so much so that it is often impossible to tell a fake video from a real one. Some deepfakes, however, present artifacts; we hypothesize that there is a correlation between the presence of manipulations and visible or non-visible artifacts.
The goal of this thesis is to propose a new deepfake detection method that exploits the presence of those visual artifacts in manipulated video sequences using three types of quality measures: BRISQUE (which quantifies the “naturalness” of an image), the Fast Fourier Transform (evaluating the amount of high frequencies) and the Laplacian operator (determining the level of sharpness of the image). These three quality measures are computed on the mouth and eye regions over 300 frames per video (about 10 seconds), and the resulting feature matrix is used to train a Convolutional Neural Network and perform classification. The experiments were carried out on the FaceForensics++ dataset, using two experimental protocols: an intra-dataset protocol (where the system is trained and tested on the same types of attack) and a cross-dataset protocol (where the system is tested on never-seen-before types of attack).
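As an illustration of two of the quality measures named above, a Laplacian-based sharpness score and a high-frequency energy ratio can be computed with plain NumPy (a sketch only: the kernel, cutoff and region handling are simplified stand-ins for the thesis pipeline, and BRISQUE is omitted because it requires a trained model):

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Sharpness proxy: variance of the Laplacian response (valid region)."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # explicit 3x3 convolution, no SciPy needed
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def high_freq_ratio(img: np.ndarray, cutoff: int = 8) -> float:
    """Fraction of spectral energy outside a central low-frequency square."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    total = spec.sum()
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    low = spec[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return float((total - low) / total)
```

Applied frame by frame to the cropped mouth and eye regions, such scores form the feature matrix that the classifier is trained on: blurring or blending artifacts shift both measures in detectable ways.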
Link esterno al gruppo di ricerca: https://pralab.diee.unica.it/it
Veronica Cristiano (Dipartimento di Matematica presso Università di Trento)
"Key Management for Cryptographic Enforcement of Access Control Policies in the Cloud - The CryptoAC use case"
(Supervisor: Prof. Silvio Ranise).
Nowadays, more and more organizations are using the Cloud to store sensitive data and share resources among employees.
The most important task of an organization using the Cloud is to ensure data sharing in a controlled manner while achieving confidentiality of the stored data, to prevent malicious attacks and data breaches due to an Honest but Curious Cloud Service Provider.
To securely manage sensitive data in that context, Cryptographic Access Control (CAC) is the natural solution. Indeed, CAC schemes combine Cryptography, to achieve confidentiality of the stored data using encryption, and Access Control techniques to regulate the access to that data.
Recently, some cryptographic schemes have been developed to support Access Control on the (Honest but Curious) Cloud; one of the most expressive is Attribute-Based Encryption (ABE).
However, although there are many research papers on these topics, most of them propose abstract schemes that are not suitable for a realistic use case. Indeed, most of them do not consider important concrete aspects that directly affect the efficiency and the security of the solution, such as cryptographic key management.
In this thesis we focus on the proper management of cryptographic keys, a necessary condition for the reliability of CAC techniques and too often the Achilles’ heel of organizations’ data security.
Since the security of a cryptosystem relies on the secrecy of the cryptographic keys (Kerckhoffs’s principle), ensuring a secure cryptographic key management is one of the most crucial issues to cope with when designing a secure system.
To protect cryptographic keys, companies implement a Key Management System.
Unfortunately, most of the documentation deals with key management techniques without giving a general overview. On the one hand, the literature offers only abstract approaches, giving a set of requirements and a list of cryptographic standards without delving into the practicality of the solutions. On the other hand, the documentation of large companies, such as AWS, IBM and Google Cloud, presents their detailed solutions without explaining the choices behind them or providing any comparison with alternatives.
In this thesis, we identify a set of best practices for securely managing cryptographic keys and we make them actionable for easy integration into realistic CAC schemes.
Starting from the guidelines of standardization bodies such as NIST, OWASP and ENISA, we identify the key life-cycle phases that most affect the security of the key - key generation, storage, rotation and expiration - and provide a set of possible solutions to manage each phase.
For each solution, we emphasize the advantages and disadvantages depending on the specific scenario, paying attention to different aspects such as portability, monetary cost, execution speed and security level.
Then, we focus on the cryptographic enforcement of AC policies in the Cloud, and we identify which cryptographic primitives and ciphers are suitable in that context, providing an analysis of them and a comparison of their performance in terms of speed and memory usage.
Finally, in order to validate the work, we apply the analysis to a concrete use case, a CAC scheme developed in the Security & Trust Research Unit of the Cybersecurity Center of FBK (Fondazione Bruno Kessler).
The result is the design of a framework for CAC schemes consisting of a set of rules and hints that guide the user to choose the most suitable key management solution depending on the specific scenario.
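To give a concrete flavour of one of the life-cycle phases analysed above, key rotation with a versioned key hierarchy might be sketched as follows (an illustration only: the HMAC-based derivation and the labels are assumptions for this sketch, and a production KMS would keep the master secret in an HSM or a cloud KMS rather than in code):

```python
import hmac, hashlib

def derive_key(master: bytes, version: int) -> bytes:
    """Derive the data-encryption key for a given rotation version
    (HKDF-like: HMAC of a version label under the master secret)."""
    return hmac.new(master, b"dek-v%d" % version, hashlib.sha256).digest()

# Placeholder secret; in practice this never leaves the HSM/KMS.
master = b"master-secret-kept-in-an-HSM"

k1 = derive_key(master, 1)
k2 = derive_key(master, 2)  # rotation: bump the version, re-encrypt the data
assert k1 != k2 and len(k1) == 32
```

The advantage of versioned derivation is that rotation does not require distributing new secrets: holders of the master key can reconstruct any version, while compromised data keys can be retired simply by advancing the version.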
Link esterno alla tesi: http://www5.unitn.it/Biblioteca/it/Web/Tesi
Link esterno al gruppo di ricerca: https://www.fbk.eu/it/cybersecurity/
Andrea Flamini (Dipartimento di Matematica / Università degli Studi di Trento)
"A Byzantine Fault Tolerant Protocol For Parallel Time-Stamping"
(Supervisor: Prof. Silvio Ranise).
Nowadays, one of the main problems blockchain platforms face is the lack of scalability, i.e. the ability of a platform to grow and handle an increasing number of users and transaction requests. A promising approach to this problem is the technique called sharding, which consists in breaking the blockchain into small parts, managed in parallel by subsets of the network of nodes called shards. However, the advantages of maintaining the ledger in parallel come at the cost of requiring a reconciliation mechanism to reliably interconnect data recorded in different shards. To do this, some consensus protocols instruct the nodes to agree on which blocks have been legitimately created by each shard, and this is the problem that gave rise to this M.Sc. thesis: the design of a novel consensus protocol, referred to as the DTSL Protocol.
At a higher level, the problem that DTSL Protocol solves is "how to allow a wide network of nodes to reach consensus on a vector of time-stamps of a predetermined set of events that are expected to happen in a given time interval" (e.g. the events could be the creation of blocks performed by the involved shards). Each node of the network observes the events from its own perspective, and constructs an initial vector of time-stamps by recording in each component the relevant information about the corresponding event (e.g. the digest of the block created by a given shard). Then, they run an instance of DTSL protocol and this brings them to agreement on an output vector which can be recorded on a public distributed ledger.
The problem above is discussed by considering the presence of malicious nodes in the network, and this leads to the identification of two properties that a consensus protocol must satisfy in order to maximise the amount of meaningful data agreed upon. First, the consensus protocol must be leaderless, which means that there is no single node proposing a protocol output and the other nodes decide whether to accept it or not, but rather several nodes must collectively construct the output vector. Secondly, the consensus process must be carried out in parallel and independently on each vector component, so that disagreement on a single component does not affect the achievement of consensus on the others.
In order to satisfy these properties, we rework two protocols that are the building blocks of the leader-based consensus protocol Algorand designed by Chen and Micali [6]: the first is Graded Consensus (GC), presented by Micali and Feldman in [7], and the second is Binary Byzantine Agreement (BBA), a probabilistic protocol presented by Micali in [12], a loop of three steps repeated until consensus is reached on a single bit. Our rework consists of:
a) the removal of the leader selection and block proposal in favour of an observation phase in which the nodes locally store their initial vectors;
b) the extension of GC and BBA to the multidimensional case, to allow parallel consensus achievement on each component of the time-stamps vector.
We design a third sub-protocol, named Termination Steps (TS), which is combined with the multidimensional GC and BBA, and enriches DTSL Protocol by ensuring that, within a fixed number of steps (hence within a bounded amount of time), the protocol terminates and provides the nodes with a shared output vector.
Finally, DTSL Protocol can be efficiently adopted by large networks (even with millions of nodes), thanks to a random lottery mechanism that selects, at each protocol step, an unpredictable and sufficiently small subset of nodes that execute the prescribed instructions. As a result, this feature allows the adoption of DTSL Protocol by public blockchain networks.
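The lottery mechanism can be illustrated with a simple hash-based sortition rule in which every node checks locally whether it is drawn for a given step (a sketch only: Algorand-style protocols use verifiable random functions tied to each node's key, not the public seed assumed here):

```python
import hashlib

def selected(seed: bytes, node_id: str, step: int, p: float) -> bool:
    """Return True if node_id is drawn for this step with probability ~p."""
    h = hashlib.sha256(seed + node_id.encode() + step.to_bytes(4, "big")).digest()
    draw = int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)
    return draw < p

# With p = 0.01, roughly 1% of a 10,000-node network runs each step.
committee = [n for n in range(10_000) if selected(b"round-seed", str(n), 1, 0.01)]
```

Because the hash is deterministic, any node can verify another node's claim of membership, while the committee for each step remains unpredictable until the seed is revealed.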
The Thesis contains a detailed description of DTSL Protocol, and great effort has been put into formally proving its security properties under realistic network and communication assumptions.
(See the Thesis bibliography for citations.)
Link esterno alla tesi: http://www5.unitn.it/Biblioteca/it/Web/Tesi
Lorenzo Invidia (Dip. Ingegneria dell'informazione, informatica e statistica. Sapienza Università di Roma)
"Rope: A New Flavor of Distributed Malware Execution"
(Supervisor: Daniele Cono D'Elia).
In the last few years, the spread of new heuristics in endpoint security products is leading attackers and security researchers to shift their focus to distributed malware execution as a means to avoid detection.
This thesis advances academic research in offensive technology by presenting Rope, a novel distributed malware execution paradigm, with the aim of raising awareness of new attack surfaces that adversaries might exploit.
The Rope design develops a distributed execution paradigm where processes can access a covert data and memory representation, and carry on the intended semantics through code reuse.
We leverage the transactional interface of Windows to isolate suspicious code and hinder inspection by security solutions. In this manner, executable code can be safely stored and injected into the context of multiple victim processes. The malware pattern is broken down into chunks, in the form of tiny chains assembled by code-reuse means, concealed in a shared data channel.
To this end, we provide a novel injection technique to quietly map and execute shellcode in the context of each victim. This specially crafted piece is responsible for the orchestration of chunk execution, faithfully reproducing the original pattern of the malware while maintaining a low profile to detectors.
We validate our concept on the latest builds of Windows, evaluating it against both consumer and enterprise-grade security products.
Link esterno alla tesi: https://i.blackhat.com/USA21/Wednesday-Handouts/us-21-Rope-Bypassing-Behavioral-Detection-Of-Malware-With-Distributed-ROP-Driven-Execution-wp.pdf
Giovanni Manca (Facoltà di Ingegneria e Architettura dell'Università degli Studi di Cagliari)
"Understanding Failures of Gradient-based Attacks on Machine Learning"
(Supervisor: Battista Biggio).
Every Machine Learning classifier is vulnerable to adversarial examples, namely, inputs intentionally designed with small feature perturbations that cause the model to make a false prediction. To craft adversarial examples, attackers typically take benign images and run a gradient descent algorithm on the model's loss function, trying to minimize it by modifying the input until it leads to misclassification.
Many defenses have been designed to overcome this vulnerability. It has been demonstrated that some of these defenses are not robust against attacks, but instead merely cause gradient-based attacks to fail. These defenses have been broken using adaptive attacks, namely, attacks specifically designed to target a defense. As a result, guidelines and best practices to improve adversarial robustness evaluations have been proposed, but the lack of an automatic evaluation procedure makes such measures difficult to apply. The goal of this work is to define a set of quantitative indicators for detecting known failures in the optimization of gradient-based attacks. These indicators can be used to visualize, debug and improve the security evaluation of the model under attack.
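The attack loop described in the first paragraph can be sketched on a toy differentiable model, stepping along the loss gradient until the prediction flips (an illustrative FGSM-like iteration on a linear classifier; the weights, input and step size below are invented):

```python
import numpy as np

# Toy linear "classifier": score > 0 means class 1, else class 0.
w = np.array([1.0, -2.0, 0.5])
b = -0.2

def predict(x):
    return int(w @ x + b > 0)

def attack(x, step=0.05, max_iters=100):
    """Move the input against its current class's score; for a linear
    model the gradient of the score w.r.t. x is simply w."""
    target_sign = -1 if predict(x) == 1 else 1
    adv = x.copy()
    for _ in range(max_iters):
        if predict(adv) != predict(x):
            break                                    # prediction flipped
        adv = adv + target_sign * step * np.sign(w)  # FGSM-like step
    return adv

x = np.array([0.9, 0.1, 0.4])   # benign input, classified as class 1
adv = attack(x)
assert predict(adv) != predict(x) and np.abs(adv - x).max() < 1.0
```

The failure indicators studied in the thesis diagnose what can go wrong in exactly this loop on real models, e.g. when a defense masks the gradient and the steps stop making progress.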
Mirko Mancini (Luiss Guido Carli)
"Sistema di acquisizione e pubblicazione di immagini personali privacy by design - Scenario di rischio cibernetico e misure di mitigazione."
(Supervisor: Paolo Spagnoletti).
A multinational company specializing in photographic and video equipment, given the growing attention of the market, governments and national privacy authorities to the protection of citizens' personal images, intends to propose a killer application that allows certain processing consents to be acquired at the time a photo or video is taken.
To this end, our consulting boutique CYBERINNOVERY®, specialized in technological innovation and GRC (Governance, Risk, Compliance) services, was tasked with a feasibility study to identify the most practicable privacy-and-security-by-design solution, in terms of compatibility of technical standards and availability of reusable devices, applicable legislation, associated cyber risk and treatment measures - assessments that together significantly reduce the TTM (Time To Market).
The high-level design of the solution provides for:
1. the possibility of expressing consent to preset types of image processing through a token in the possession of the person portrayed, which may be physical or virtual, i.e. a wearable device or a smartphone application;
2. the acquisition of consent through a radio-frequency exchange between the tokens of the users within the field of view and the image sensors - known as CCDs (Charge-Coupled Devices) - which could either be re-engineered from scratch by the proposer and marketed on new devices, or made compatible with the existing installed base of cameras, including smartphones, through firmware updates and external control logic connected via Bluetooth or USB-UART;
3. that the consents expressed by the (recognizable) subjects portrayed be stored in the metadata of the media file and digitally signed by the producer, pursuant to art. 24 of Legislative Decree no. 82/2005, or through advanced and/or qualified electronic signatures pursuant to arts. 26 and 29 of EU Regulation 2014/910, so that in the absence of consent the user's face is blurred;
4. that distribution and publication - i.e. the processing of the media content through sharing tools (social networks or instant messaging) - be subject to the presence and validity of the consents in the metadata, for which the media platforms should implement new controls following specific regulatory measures. When a minor is filmed, the implementation may also include a predefined consent level that prevents the recording and dissemination of personal data concerning minors under 16 years of age, as established by art. 8 of the GDPR, or under the applicable national legislation (in Italy, minors must be at least 14, pursuant to art. 2-quinquies of Legislative Decree no. 196/2003).
However, even with the suggested approach, better informational confinement of the digital persona still amounts to a data-driven application: hence the need for informational-confinement choices and for protection against data-breach scenarios (through encryption and anonymization), against device hacking, and against Denial of Service attacks that would inhibit verification of the generated transactions.
Extensions of the concept include digital fences - fixed token installations - to limit image acquisition over sensitive sites by commercial drones, and smart-city tokens for acquiring consent for value-added services.
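Point 3 of the design above (consent records embedded in the media file's metadata and signed) can be sketched as follows; here an HMAC stands in for the advanced or qualified electronic signatures required by the cited legislation, and the field names are invented for illustration:

```python
import hmac, hashlib, json

SIGNING_KEY = b"manufacturer-signing-key"  # placeholder for a real signing key

def sign_consent(subject_token: str, purposes: list) -> dict:
    """Build a consent record to be embedded in the media file metadata."""
    record = {"subject": subject_token, "purposes": sorted(purposes)}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_consent(record: dict, purpose: str) -> bool:
    """A platform checks the signature before allowing publication;
    the face must be blurred when consent is missing or forged."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"]) and purpose in record["purposes"]

rec = sign_consent("token-123", ["social-sharing"])
assert verify_consent(rec, "social-sharing")
assert not verify_consent(rec, "advertising")
```

A real deployment would use asymmetric signatures so that platforms can verify consents without sharing the producer's secret.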
Damiano Melotti (Dipartimento di Ingegneria e Scienze dell'Informazione, Università di Trento)
"Reversing and Fuzzing the Google Titan M Chip"
(Supervisor: Andrea Continella).
Google recently introduced a secure chip called Titan M in its Pixel smartphones, enabling the implementation of a Trusted Execution Environment (TEE) in tamper-resistant hardware, mitigating hardware-level exploits, and providing several security-sensitive backends. Such a trend, also followed by other OEMs, will become increasingly popular in the near future of Android security.
TEEs have proven effective in reducing the attack surface exposed by smartphones by protecting specific security-sensitive operations. However, studies have shown that TEE code and execution can also be targeted and exploited by attackers; therefore, studying their security lays the basis of the trust we place in their features.
In this paper, we provide the first security analysis of Titan M. First, we reverse engineer the firmware and we review the open source code in the Android OS that is responsible for the communication with the chip. By exploiting a known vulnerability, we then dynamically examine the memory layout and the internals of the chip. Finally, leveraging the acquired knowledge, we design and implement a structure-aware black-box fuzzer.
Using our fuzzer, we rediscover several known vulnerabilities after a few seconds of testing, proving the effectiveness of our solution. In addition, we identify and report a new vulnerability in the latest version of the firmware.
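The difference between a naive byte flipper and a structure-aware fuzzer like the one described can be sketched over an invented tag-length-value command format (the actual Titan M message format and targets are documented in the linked paper; nothing below reproduces them):

```python
import random, struct

def encode(tag: int, payload: bytes) -> bytes:
    """Invented TLV framing: 1-byte tag, 2-byte big-endian length, payload."""
    return struct.pack(">BH", tag, len(payload)) + payload

def mutate(tag: int, payload: bytes, rng: random.Random) -> bytes:
    """Mutate one field but re-encode correctly, so the message still
    parses and the fuzzer reaches logic beyond the input parser."""
    choice = rng.randrange(3)
    if choice == 0:                      # boundary-value or neighbouring tag
        tag = rng.choice([0x00, 0xFF, tag ^ 1])
    elif choice == 1 and payload:        # flip one payload byte
        i = rng.randrange(len(payload))
        payload = payload[:i] + bytes([payload[i] ^ 0xFF]) + payload[i + 1:]
    else:                                # grow the payload with filler
        payload += bytes(rng.randrange(1, 16))
    return encode(tag, payload)

rng = random.Random(0)
seed_msg = (0x01, b"\x00\x01KEY")
cases = [mutate(*seed_msg, rng) for _ in range(100)]
# Every test case still carries a well-formed header with a consistent length.
assert all(struct.unpack(">H", c[1:3])[0] == len(c) - 3 for c in cases)
```

Keeping the framing valid is what lets such a fuzzer exercise the command handlers of the target rather than being rejected at the parsing stage.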
Link esterno alla tesi: https://github.com/quarkslab/titanm/blob/master/ROOTS/DamianoMelotti_ReversingAndFuzzingTheGoogleTitanMChip_paper.pdf
Link esterno al gruppo di ricerca: https://www.utwente.nl/en/eemcs/scs/
Martina Noventa (Università degli Studi di Ferrara)
"Il Trojan horse: l'assenza del diritto dinanzi agli strumenti investigativi tecnologicamente avanzati"
(Supervisor: Francesco Morelli).
Abstract
Technological change and the subsequent - and continuing - progress of computing have created new challenges for the law, which still finds itself unprepared to satisfy guarantees that are recognized and expressly enshrined in theory, only to be easily lost in the practice of investigative activity. The aim of this study is to highlight the conflict between the interests at stake, which can be grouped into two main lines of thought: the first, supported by case law which, albeit wavering, tends to legitimize the use by public authorities of malicious software tools to observe, acquire and copy data and information contained in the target device; the second, upheld by legal scholarship, which gives greater weight to the supremacy of the fundamental rights of the individual - whether the suspect or a third party - rights inevitably compressed by the use of such tools, also because of the lack of a legislative framework that expressly sets limits to safeguard human rights and personal dignity.
After an introductory chapter on the only function of the state trojan currently regulated by law, namely its use for remote audio surveillance, the analysis focuses on identifying the many additional ways in which this technological tool can operate, at times comparing it with traditional investigative instruments of the Italian legal system, and at times relating it to institutions of criminal procedure that sit uneasily with this kind of investigative activity.
This analysis will lead the reader to ask whether, from a legal standpoint, the software tools and programs used should be regarded as atypical evidence within the criminal trial or as unconstitutional, given the absence of legal limits on the restriction of individual liberty.
The work closes with the hope that the legislator will finally become aware of the enormous invasive potential of computer viruses.
Andrea Olla (Università degli studi di Cagliari)
"Penetration test of an IoT automotive device"
(Supervisor: Giorgio Giacinto).
Abstract:
My thesis work was carried out during an internship at the computer security company Abissi srl, where I had the opportunity to work on various tasks. I decided to present only one of them in this thesis, for two main reasons: first, it is the task in which I most increased my knowledge and strengthened my skills; second, it is important to enlarge the scientific literature on the security of IoT systems, given the lack of information sources on the subject and the fact that their security is becoming more important every day.
The core of the thesis is the summary of the penetration test of an IoT system developed by a company that produces systems designed to increase safety in vehicles for transporting both goods and people.
The penetration test carried out falls within the black-box category. Due to the non-disclosure agreement signed with the company that produces this system, I cannot disclose the name of the company or the names of the devices analyzed, and for the same reason I must omit some details that could be used to re-identify the manufacturer.
The research I was involved in is a classic penetration test of an IoT device: the analysis was intended to find the vulnerabilities of the device and to inform the manufacturer about their risk and possible mitigation techniques. By classic I do not mean that the techniques and procedures used are standard; on the contrary, in an IoT device penetration test one must deal with technologies that are often different and implemented in different ways, so techniques and processes always differ.
The device under study is part of a system developed to improve the safety of vehicles. The system is designed for companies that want to prevent accidents, monitor the status of their vehicles and establish the facts after an accident.
The system is composed of different parts: the device I analyzed, a number of cameras and sensors that can be connected to it, a server that stores and manages all the information, and a web app used to monitor the status of all the vehicles belonging to the same fleet. One camera points at the driver; its aim is to recognize whether the driver is falling asleep, using a smartphone, or otherwise distracted from driving. Other cameras film the outside of the vehicle, and their recordings are saved only when an event occurs. Events are detected using information arriving from an accelerometer and from the vehicle; speed information is saved too. Recordings are sent to the backend to be stored. From the web platform an employee can monitor the status and location of the vehicles, watch recordings of past events, and make voice calls to the drivers.
The thesis is organized as follows:
Chapter 1 discusses the IoT world, in particular its security concerns and the main differences from the security of IT systems.
Chapter 2 explains in detail a strategy for analysing an IoT system.
Chapter 3 lists and describes, at a high level, the most common threats and the best practices to avoid them.
Chapter 4 provides the technical background required to understand how the tested device works and the tools used to perform the test.
Chapter 5 contains the report of the device analysis.
Link esterno alla tesi: https://drive.google.com/file/d/1cHKID80fl7fW23TmN0hbVUK2pwAxExuE/view
Luca Petrillo (Università degli Studi del Molise - Dipartimento di Bioscienze e Territorio)
"Generazione Automatica di Clusters Tweets su Tematiche di Cybersecurity"
(Supervisor: Francesco Mercaldo).
The use of computing devices such as computers, smartphones and IoT systems has grown exponentially over the last ten years, improving everyday life by simplifying many operations that previously had to be performed manually or took far longer. The data created and processed by these devices has become a precious resource, and every sector of our society is going through a phase of digitalization; the use of computer systems is part of our daily lives. Alongside these benefits there is the other side of the coin: data theft and fraud through these services make the news almost daily, in most cases stemming from known or unknown vulnerabilities in these systems. Twitter is one of the most used social networks in the world: it lets people publish short text messages, possibly with photos and videos, and communicate with each other, and it is used by organizations, institutions, politicians and ordinary people, thanks also to its real-time communication. Because of these two factors, a very large amount of data is produced daily, and this data can be exploited as a resource to extract important information. It would therefore be useful to have systems that can recognize, from the description given by an expert or by an ordinary user, the emergence of a potential problem in the computing world. The goal of this thesis is the design and implementation of a methodology that extracts knowledge from short text messages.
In particolare siamo interessati ad estrarre informazioni relative a tematiche di sicurezza informatica, attraverso lo studio del testo contenuto in un tweet e la creazione di cluster suddivisi per vulnerabilita?. Infatti, tweet che trattano una stessa vulnerabilita?, anche con l’uso di un linguaggio diverso, possono essere raggruppati insieme per studiare quale sia l’impatto di queste problematiche di cybersecurity sulle persone.
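To give a flavour of the clustering idea, here is a minimal, self-contained Python sketch that groups tweets by lexical similarity. The bag-of-words representation, the greedy threshold-based strategy and all names are simplifications chosen for illustration, not the methodology actually implemented in the thesis.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words vector for a tweet (lowercased token counts)."""
    return Counter(re.findall(r"[a-z0-9#]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_tweets(tweets, threshold=0.3):
    """Greedy clustering: each tweet joins the first cluster whose
    representative (first member) is similar enough, else starts a new one."""
    clusters = []  # list of lists of tweet indices
    vecs = [vectorize(t) for t in tweets]
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

tweets = [
    "New Log4Shell exploit seen in the wild, patch log4j now",
    "log4j Log4Shell patch released, update immediately",
    "Heartbleed still affects thousands of OpenSSL servers",
]
print(cluster_tweets(tweets))  # the two log4j tweets end up together
```

A production pipeline would replace the raw counts with TF-IDF weights or embeddings, but the grouping principle (same vulnerability, different wording, high lexical overlap) is the same.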
Matteo Rizzi (DISI, Università di Trento)
"TLS Analyzers for Android Apps: State-of-the-art Analysis and Integration in TLSAssistant"
(Advisor: Silvio Ranise).
In a world where interconnection between devices is a prerequisite for many applications, security in communications is critical.
TLS (Transport Layer Security) is the foundation of communications encryption. Born as the successor of SSL (Secure Sockets Layer, created by Netscape), it allows information to be shared securely over a transmission channel from source to destination (end-to-end).
TLS uses ciphers to make the connection reliable and secure. As technology progresses and computational power increases, new vulnerabilities are discovered in some of them.
Once a cipher is found to be vulnerable and an attack becomes feasible, the cipher is deprecated and removed from the next version of the protocol.
Unfortunately, upgrading the TLS protocol to the latest version could cause compatibility issues.
A compromise is often adopted: use the most compatible version of TLS (for example TLS 1.2, even if it is not the most recent one) and configure it correctly by excluding vulnerable ciphers. To evaluate and manage the quality of a TLS configuration, there are dedicated tools such as TLSAssistant. TLSAssistant combines state-of-the-art TLS analyzers in its analysis phase and provides mitigations for users to apply in their own systems. What TLSAssistant needed was a modernization of the analyzer used for vulnerability detection in Android applications.
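The compromise described above can be sketched with Python's standard `ssl` module (an illustration of the configuration principle, not TLSAssistant's own code): keep TLS 1.2 for compatibility, but raise the protocol floor and drop cipher families with known weaknesses.

```python
import ssl

# Sketch: pin the protocol floor to TLS 1.2 and exclude weak ciphers.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# OpenSSL cipher string: keep ECDHE + AES-GCM (AEAD) suites, exclude
# RC4, 3DES and MD5-based suites as well as anonymous key exchange.
ctx.set_ciphers("ECDHE+AESGCM:!RC4:!3DES:!MD5:!aNULL")
print(len(ctx.get_ciphers()) > 0)  # some strong suites remain enabled
```

The same idea applies to server-side configurations (e.g. an OpenSSL or web-server cipher list); tools like TLSAssistant automate checking that the resulting configuration excludes every cipher involved in a known attack.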
My contribution was to convert the Mallodroid Android app analyzer from the now-deprecated Python 2 to Python 3.
Then, to extend the coverage of vulnerability detection in the app scenario, I compared various security tools for analyzing Android applications and chose SUPERAnalyzer: an analyzer written in Rust that decompiles and analyzes APK files according to a system of regular-expression (regex) rules. After writing a parser in Python and importing the rules needed to make SUPER contextual to TLS, I extensively tested its effectiveness on a dataset of 1169 APKs (chosen from among the most used apps in the Google Play Store).
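To give a flavour of how such regex rules work, here is a hypothetical, TLS-oriented rule set expressed in Python. The rule names and patterns are illustrative stand-ins, not SUPER's actual rule file, but they show the mechanism: each rule flags a risky pattern in decompiled application source.

```python
import re

# Hypothetical TLS-oriented rules in the spirit of a regex rule system:
# each named rule matches a pattern considered dangerous in app code.
RULES = {
    "trust-all-certs": re.compile(r"X509TrustManager"),
    "allow-all-hostnames": re.compile(r"ALLOW_ALL_HOSTNAME_VERIFIER|AllowAllHostnameVerifier"),
    "weak-protocol": re.compile(r'SSLContext\.getInstance\(\s*"(SSLv3|TLSv1)"\s*\)'),
}

def scan(source):
    """Return the names of all rules matched in a decompiled source file."""
    return [name for name, rx in RULES.items() if rx.search(source)]

snippet = 'SSLContext ctx = SSLContext.getInstance("SSLv3");'
print(scan(snippet))  # only the weak-protocol rule fires
```

A real rule set also carries severity levels and human-readable descriptions per rule, so matches can be turned into actionable findings.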
To make sure that the integration of the new tools had not changed the existing functionality, I performed a regression test, which was successful. I also detected a flaw, unrelated to the integration of SUPERAnalyzer, in the analysis of the SLOTH vulnerability: the first attack vector of SLOTH was only partially implemented in TLSAssistant, while the second vector was completely missing. After accurately simulating a system vulnerable to SLOTH, I implemented the necessary checks to fix the bug in the first attack vector and fully implemented the check for the second attack vector in TLSAssistant.
Finally, I tested the final version of TLSAssistant on a real use case: an online citizen authentication application using the Italian electronic identity card (the CieID app). To mitigate the 3SHAKE and LUCKY13 attacks, it was necessary to upgrade the authentication server to OpenSSL v1.1.1. Updating the authentication server produced a TLS padding error.
By properly analyzing the app, I realized not only that the problem was related to the padding used in the digital ID card chip, but also that the practical authentication flow (the TLS mutual authentication) between the app and the authentication server was not consistent with the theoretical flow.
After I reported the issue to the Istituto Poligrafico e Zecca dello Stato, the app flow was remodeled.
Antonella Gioia Rodio (Dipartimento di Ingegneria dell'informazione, informatica e statistica, Sapienza Università di Roma)
"Poking Hypervisors: A Tale of Evasions and Remediations for Transparent Monitoring"
(Advisor: Daniele Cono D'Elia).
The advent of virtualization-based monitoring has significantly enhanced malware analysis techniques. A key factor for any security framework that uses virtualization is to monitor the state of a target environment through critical features. This thesis provides a glimpse into the inner workings of hypervisors, which enable the monitoring of a target from a safe vantage point, with higher accuracy than other state-of-the-art malware analysis platforms. Building on a set of features provided by CPU vendors, hypervisors may intercept a set of critical security-related events. Accordingly, a malware analyst can gather precisely the information deemed important.
Throughout the thesis, we will see how hypervisor-based monitoring resembles a “cat and mouse” game between miscreants and security analysts. The linchpin of the thesis is to assess how transparent an analysis technique can be, i.e., the property of making virtual and native hardware indistinguishable even to a dedicated adversary. We first review how past hypervisor-based approaches addressed different stumbling blocks. Then, having identified a gap between the survey literature and the panorama of cutting-edge monitoring methodologies, the thesis provides a fine-grained assessment of available, state-of-the-art hypervisor-based security frameworks. Concisely, it distills the inner workings, the primitives, and the limitations exposed by the above-mentioned frameworks, to facilitate the identification of a usage scenario in a security context. Moreover, we also sketch the existing security caveats, advancing potential adversarial patterns.
While the fidelity and non-intrusiveness of analysis frameworks continue to improve, malware writers probe for tell-tale signs of monitoring through discrepancies. Ongoing endeavors attempt to tackle this issue, ensuring that distinctive characteristics of analysis environments are replaced with realistic ones and that any instrumentation artifacts remain cloaked. In this ongoing arms race, recent research has demonstrated how side-channel attacks may enable adversaries to detect interception events. Accordingly, the thesis presents patterns, such as the increase in instruction execution time and the subsequent cache artifacts, that shatter the illusion of transparency and provide overwhelming evidence of the instrumentation. Unprivileged side-channel attacks in virtualized environments are quickly evolving, and the state of affairs is exacerbated by the advent of microarchitectural threats, which can circumvent current state-of-the-art defenses.
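The timing discrepancy at the heart of such detections can be illustrated with a toy measurement. This is pure Python, so the "intercepted" operation is merely simulated with extra work; a real evader times instructions such as CPUID that force a VM exit and compares the cost against a cheap baseline instruction.

```python
import statistics
import time

def time_op(op, runs=2000):
    """Median wall-clock cost of one call to op, in nanoseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        op()
        samples.append(time.perf_counter_ns() - t0)
    return statistics.median(samples)

# Stand-ins: a cheap baseline operation, and one playing the role of an
# instruction the hypervisor must intercept. The interception cost is
# simulated with extra work so the discrepancy is visible in pure Python.
baseline = lambda: None
intercepted = lambda: sum(range(200))

ratio = time_op(intercepted) / max(time_op(baseline), 1)
print(ratio > 2)  # a large ratio is the tell-tale sign an evader looks for
```

Taking the median over many runs is what makes the measurement robust to scheduler noise, which is also why such checks are hard to defeat simply by adding jitter.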
Furthermore, the thesis showcases a case study against a reference hypervisor-based implementation of API hooking, a technique ordinarily employed by malware analysts to pinpoint possibly malicious features of a program from its externally observable behavior. The case study provides a practical demonstration of how to profitably evade this layer of defense during potential Red Team operations.
The thesis concludes by leaving open the question of how the security research community should react and what direction to head in.
Alessandro Sanna (Dipartimento di Ingegneria Elettrica, Elettronica ed Informatica; Università degli Studi di Cagliari)
"Dynamic analysis and instrumentation of interaction-based Microsoft Office malware"
External link to the research group: https://pralab.diee.unica.it/
(Advisors: Davide Maiorca, Giorgio Giacinto, Fabrizio Cara).
Microsoft Office has proven, since its introduction in 1990, to be a formidable tool for enhancing the productivity of its users. This was one of the key factors that led it to become as omnipresent as it is today. However, this popular toolbox owes its success to the variety of tools available to the user. One of them is the ability to write custom programs in the form of macros written in Visual Basic for Applications (VBA), a proprietary scripting language owned by Microsoft. Since there are virtually no limits to the contents of such macros, any malicious actor could try to embed custom malware inside an Office document. Once this infectious file is sent to a victim and the victim opens it, the macro is likely to self-execute and compromise them. The key objective of this work is to dissect these malicious macros. Operationally, we employ two procedures: dynamic analysis and instrumentation. The first allows us to see what the malware would do if executed in a realistic environment; the second is vital for understanding the malware's contents and logic, which are very likely to be obfuscated. Currently, there are many tools for the analysis of this type of malware, with multiple entries for both the static and the dynamic approaches. However, no tool available at the moment can bypass interaction-based malware automatically. Interaction-based Office macros are programs that require the user to take one or more actions for execution to continue. Even if these are trivial actions, e.g. clicking a button, this kind of obstacle becomes far more burdensome when the goal is to dynamically analyze thousands of files instead of only one. This work therefore improves a research tool, Oblivion, with an additional feature that makes the program capable of simulating user interactions, allowing researchers to automatically analyze a plethora of new files that previously could not give them any information.
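A first triage step of the kind this line of work automates can be sketched as follows. The regexes and the `triage` function are illustrative stand-ins (real tooling must first extract the VBA source from the OLE/OOXML container, e.g. with oletools), but they show how auto-execute entry points and clickable handlers to simulate can be located in macro source.

```python
import re

# Auto-execute entry points: these run as soon as the document is opened.
AUTO_EXEC = re.compile(
    r"\b(AutoOpen|AutoExec|Document_Open|Workbook_Open|Auto_Open)\b",
    re.IGNORECASE,
)
# Interaction handlers: buttons a sandbox would need to "click".
INTERACTION = re.compile(r"\b(\w+)_Click\b")

def triage(vba_source):
    """Locate auto-exec entry points and button handlers in VBA source."""
    return {
        "auto_exec": sorted(set(AUTO_EXEC.findall(vba_source))),
        "buttons_to_click": sorted(set(INTERACTION.findall(vba_source))),
    }

macro = """
Sub Document_Open()
    MsgBox "Please enable content"
End Sub
Sub CommandButton1_Click()
    Shell "powershell -enc ..."
End Sub
"""
print(triage(macro))
```

The `buttons_to_click` list is exactly the information an interaction-simulating sandbox needs: every handler found there must be triggered for the dynamic trace to go past the user-interaction barrier.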
Marco Storto (Università degli Studi del Molise)
"Confronto tra Metodi di Apprendimento Automatico Classici e Quantistici per la Malware Detection"
(Advisor: Francesco Mercaldo).
Android is the most widespread mobile operating system in the world and, consequently, also the one most targeted by attackers who develop malicious applications to steal private and sensitive information from infected devices. This, in turn, is pushing ethical hackers and Android developers to search for innovative methods capable of identifying such threats as quickly as possible. Methods are also sought that can identify malicious code even when it differs only slightly from code that has already been classified, since many malware samples are created by modifying and reusing parts of other, already classified malware. A very common approach to Android malware detection is the use of deep learning techniques, and specifically of convolutional neural networks. In recent years, moreover, thanks to new work in the field of quantum computing, attempts have been made to improve such machine learning systems by integrating quantum components into these algorithms. This thesis presents a comparison between traditional deep learning models and hybrid classical-quantum ones, taking into account the current limitations of quantum simulation, due to the limited computing capacity of today's machines compared to a true quantum computer. Considering the results obtained across the various experiments as a whole, we can state that, despite the limitations imposed by the very structure of the hybrid classical-quantum model, it nonetheless achieved good results, even compared with a classical convolutional network that has no constraint on the size of the input images. Ultimately, comparing the data from the test phase, the traditional convolutional network obtains better results, especially with respect to the loss metric, proving more reliable and consistent than the hybrid model.
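A common way to feed binaries to a convolutional network in this line of work is to render each sample as a grayscale image, one pixel per byte. The following is a minimal, stdlib-only sketch of that representation (illustrative; not necessarily the exact preprocessing used in the thesis).

```python
import math

def bytes_to_image(data, width=None):
    """Map a raw binary (e.g. an APK or DEX file) to a 2D grayscale image:
    each byte becomes one pixel intensity in [0, 255], laid out in rows
    of fixed width. Defaults to a roughly square image."""
    if width is None:
        width = max(1, math.isqrt(len(data)))
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))  # zero-pad last row
    return rows

img = bytes_to_image(bytes(range(16)))  # 16 bytes -> a 4x4 image
print(len(img), len(img[0]))
```

Once every sample is an image of a fixed size, both the classical CNN and the hybrid model can consume the same inputs; the hybrid model's limitation mentioned above shows up here as a much smaller admissible `width`.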
Ivan Vaccari (CNR-IEIIT)
"Security aspects about Internet of Things networks, devices and communication protocols"
(Advisors: Alessandro Armando, Maurizio Aiello, Enrico Cambiaso).
During my Ph.D. years, I investigated security aspects of IoT networks and devices. In particular, for the ZigBee protocol, I optimized the attack that exploits a vulnerability of the XBee module, related to AT command packets, to obtain more efficient results, and I implemented a protection system able to detect and mitigate this novel attack. Meanwhile, I studied and implemented well-known attacks in order to evaluate the security of the ZigBee protocol. Initially, well-known wireless cyber-attacks were investigated and implemented to verify whether IoT networks are vulnerable to them: a set of well-known attacks targeting non-IoT systems was selected and performed against a test ZigBee network, in order to evaluate the possibility of effectively targeting the network. After this initial study, the attacks selected were: Replay, Sniffing, Jamming and Flooding/DoS. Beyond these activities, I continued to investigate the security of IoT modules and networks by focusing on different communication protocols. Given the low computational capacity of the components, some security aspects may be ignored and then exploited by a malicious user.
In the second year, I studied the security of IoT networks based on Wi-Fi. In particular, I studied the ESP8266 module (a widely adopted IoT device) in order to identify possible vulnerabilities and also to exploit this device to execute cyber-attacks. The ESP8266 module is vulnerable to a Replay attack and, on this topic, I implemented a protection system able to protect the communication against it. I also implemented three cyber threats using the ESP8266 module as an attack vector: a Wi-Fi Jamming attack, a Social Engineering attack and a SlowDoS attack. The Wi-Fi Jamming attack creates a high number of fake Wi-Fi networks to obstruct the selection of the correct one. The Social Engineering attack instead creates a fake free Wi-Fi network that requires Facebook credentials to log in; when a victim enters their credentials, the ESP module stores them inside the device. Finally, the SlowDoS attack was implemented and tested against a physical Apache2 server.
In the last year, I investigated the IoT ad-hoc protocol MQTT in order to identify possible vulnerabilities and implement novel attacks able to exploit them. For this purpose, I implemented an innovative cyber-attack against MQTT called SlowITe, a novel low-rate denial-of-service attack that targets MQTT through low-rate techniques. SlowITe exploits the weakness that allows the client to configure the behaviour of the server. I also implemented a new version of the tool, called SlowTT, able to exploit other MQTT parameters and keep connections alive for a potentially infinite time. Moreover, I worked on the definition of a new MQTT-based dataset, called MQTTset, that simulates an indoor smart-space IoT environment and supports the implementation of machine-learning-based protection systems that detect attacks in real time. This dataset was adopted to implement a detection system based on machine learning/artificial intelligence, in order to detect and mitigate cyber-attacks against MQTT networks and devices. Finally, on MQTT, I exploited the communication protocol to implement a tunneling system able to exfiltrate sensitive information from a private network, together with a related machine-learning detection system tuned through hyperparameter optimization to improve accuracy and other statistical metrics.
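The client-controlled behaviour that SlowITe-style attacks leverage includes the two-byte Keep Alive field of the MQTT CONNECT packet. Below is a minimal sketch of building such a packet with the maximum Keep Alive value (for illustration only; this is not the SlowITe tool, which additionally manages many concurrent low-rate connections).

```python
import struct

def mqtt_connect_packet(client_id, keep_alive):
    """Build an MQTT 3.1.1 CONNECT packet. The two-byte Keep Alive field
    is chosen by the *client*: setting it to the maximum (65535 s, about
    18 hours) lets an otherwise idle connection occupy a broker slot for
    a very long time at negligible cost to the attacker."""
    proto = b"\x00\x04MQTT\x04\x02"            # name, level 4, clean session
    var_header = proto + struct.pack(">H", keep_alive)
    payload = struct.pack(">H", len(client_id)) + client_id.encode()
    body = var_header + payload
    assert len(body) < 128                     # single-byte Remaining Length
    return bytes([0x10, len(body)]) + body     # 0x10 = CONNECT

pkt = mqtt_connect_packet("slow-client-1", 65535)
print(pkt[10:12])  # the Keep Alive bytes: b'\xff\xff'
```

Detection systems such as the one built on MQTTset look precisely for anomalies of this kind, e.g. many connections with extreme Keep Alive values and almost no traffic.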
External link to the thesis: http://hdl.handle.net/11567/1047169
Giacomo Zanolli (Dipartimento di Ingegneria e Scienza dell'Informazione dell'Università degli Studi di Trento)
"FIDO2 Passwordless Authentication: From the basics to an implementation in the context of an authorization system"
External link to the research group: https://st.fbk.eu
(Advisor: Prof. Silvio Ranise).
When it comes to the security of our digital systems, authentication often plays a central part, constituting a bedrock upon which entire frameworks build.
Passwords have long been the most widespread way in which we address this requirement, but it is well known that they are subject to a series of issues that can undermine their security. A couple of examples in this regard are poor password choice (e.g., simple or reused passwords) and vulnerability to social engineering attacks, such as phishing.
Over the years, numerous alternatives have emerged to either support passwords or replace them entirely.
This thesis aims to analyze one of those alternatives: FIDO2.
At its core, FIDO2 differs substantially from password-based authentication by relying on asymmetric cryptography. Furthermore, it shifts the burden of credential management from the end-user to trusted pieces of hardware or software, with considerable gains in terms of security and usability.
FIDO2 can be used either to assist passwords (in an MFA scenario) or to replace them altogether (while still maintaining MFA).
In the proposed work, we first present an overview of the WebAuthn specification: the part of FIDO2 with which most developers will have to interact.
We then proceed to showcase a demonstrative implementation of FIDO2 in three different scenarios:
i. In a simple client-server context, with a client proving its identity to a server.
ii. In the context of an authorization flow of OAuth2.1, with FIDO2 constituting the authentication portion of the flow.
iii. In the context of a native Android application, which represents a different environment compared to the web browser of the first two cases.
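To make the WebAuthn portion of these scenarios concrete, here is a sketch of the registration options a Relying Party might send to the browser for `navigator.credentials.create()`. Field values (the rp id, the algorithm list, the timeout) are illustrative defaults rather than the thesis's implementation, and a real deployment must persist the challenge server-side and verify it against the authenticator's attestation response.

```python
import base64
import os

def b64url(data):
    """Base64url without padding, as WebAuthn clients expect."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def registration_options(user_id, user_name, rp_id="example.com"):
    """Sketch of PublicKeyCredentialCreationOptions for a registration
    ceremony; the server would JSON-serialize this for the browser."""
    return {
        "challenge": b64url(os.urandom(32)),   # random, single-use
        "rp": {"id": rp_id, "name": "Demo RP"},
        "user": {"id": b64url(user_id), "name": user_name,
                 "displayName": user_name},
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # ES256
        "authenticatorSelection": {"userVerification": "required"},
        "timeout": 60000,                      # milliseconds
    }

opts = registration_options(b"user-42", "alice")
print(sorted(opts))
```

The asymmetric-cryptography shift described above lives in `pubKeyCredParams`: the authenticator generates the key pair, and the server only ever stores the public key, so there is no shared secret to phish or leak.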
Finally, building on the insights we gained from the analysis of existing literature, the theoretical study of the standard, and the experience of its practical implementation, we draw some remarks on the overall advantages and shortcomings of FIDO2 for different real-world use-case scenarios. After highlighting novel and relevant problems, we propose original mitigations to them.
Overall, the work aims to provide a comprehensive overview of FIDO2 specification from both its theoretical and practical standpoint and a reference implementation that can constitute a starting point for future developments.
It is worth noting that we are concerned with the point of view of the application developer; therefore, we do not cover the portion related to FIDO's CTAP2 specification.