1) Embedded Systems and IoT Security
IoT devices have become increasingly prevalent, yet they frequently lack the security mechanisms needed to protect themselves from malware attacks, often due to cost, size, or power constraints. To address this issue, our project develops lightweight security techniques based on the hardware-software co-design principle. Our main objective is to minimize hardware costs (or reuse existing low-cost secure hardware, e.g., the TrustZone-M TEE) while still providing provable security guarantees, even for low-cost IoT devices. To date, we have developed the following techniques:
- Secure federated applications against poisoning attacks: [arxiv24]
- Remote attestation: [WiSec17, DATE18, HOST18, AsiaCCS18, SEC19, CCS21]
- Software update, reset, and erasure: [EMSOFT18, ICCAD19]
- Proof of execution: [SEC20, DAC22]
- HW/SW verification: [ICCAD19, SEC19, SEC20]
- Control-flow attestation: [SEC23]
- Run-time auditing: [SEC23]
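At the core of several of these techniques is remote attestation: a verifier challenges a device (the prover) to prove that its memory holds the expected software. The sketch below illustrates the basic challenge-response idea only; it is not the protocol from the cited papers, and the shared key and toy firmware bytes are hypothetical placeholders.

```python
import hmac
import hashlib
import os

# Hypothetical symmetric key provisioned to both verifier and prover at deployment.
SHARED_KEY = b"demo-attestation-key"

def prover_attest(firmware: bytes, nonce: bytes) -> bytes:
    """Prover: compute an HMAC over the verifier's nonce and its firmware image."""
    return hmac.new(SHARED_KEY, nonce + firmware, hashlib.sha256).digest()

def verifier_check(expected_firmware: bytes, nonce: bytes, report: bytes) -> bool:
    """Verifier: recompute the expected report and compare in constant time."""
    expected = hmac.new(SHARED_KEY, nonce + expected_firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, report)

firmware = b"\x90\x90\xc3"   # toy firmware image
nonce = os.urandom(16)       # fresh challenge, prevents replaying old reports
report = prover_attest(firmware, nonce)
print(verifier_check(firmware, nonce, report))                          # benign device
print(verifier_check(firmware, nonce, prover_attest(b"evil", nonce)))   # modified image
```

A fresh nonce per challenge is essential: without it, a compromised device could replay a report computed before infection.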
2) Legacy Binary Security
Outdated libraries, misuse of cryptographic primitives, and algorithmic weaknesses are among the exploitable vulnerabilities that may exist in legacy software. Fixing these vulnerabilities can be challenging and time-consuming, especially when the source code is unavailable. In this project, we therefore aim to propose frameworks that identify and patch vulnerabilities in legacy binaries with minimal manual intervention from developers. Our first work successfully identified various outdated cryptographic hash functions in legacy binaries and proposed a novel approach to patching them. We aim to build on this work to identify and patch further classes of vulnerabilities.
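One common way to locate a hash function in a stripped binary is to scan for its well-known initialization constants. The sketch below applies this idea to MD5; it is a simplified illustration, not our framework's actual detection logic, and the toy "binary" is fabricated for the example.

```python
import struct

# MD5's four 32-bit chaining-value initialization constants (RFC 1321),
# packed little-endian as they typically appear in x86 binaries.
MD5_INIT = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476]

def looks_like_md5(blob: bytes) -> bool:
    """Heuristic: True if all four MD5 init words occur in the binary blob."""
    return all(struct.pack("<I", c) in blob for c in MD5_INIT)

# Toy binary embedding the constants, as a compiled MD5 routine would.
blob = (b"\x00" * 8
        + b"".join(struct.pack("<I", c) for c in MD5_INIT)
        + b"\xff" * 8)
print(looks_like_md5(blob))        # constants present
print(looks_like_md5(b"\x00" * 32))  # constants absent
```

Real binary analysis must also handle constants split across instructions, alternative encodings, and false positives (SHA-1 shares some of these words), which is part of what makes automated identification non-trivial.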
3) Personalized Federated Learning
Personalized Federated Learning (PFL) has gained significant traction in recent years because it combines the benefits of collaborative (global) training from classical federated learning with personalization to each client's own data. However, current PFL methods are evaluated on widely varying datasets, models, and privacy guarantees, making it difficult to compare their performance and applicability in realistic settings. This project aims to develop a framework for evaluating existing PFL methods under a common set of realistic models and datasets.
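To make the contrast between global training and personalization concrete, the sketch below shows classical FedAvg aggregation followed by one simple personalization strategy, interpolating each client's local weights with the global model. This is an illustrative toy on scalar weight vectors, not one of the specific PFL methods our framework evaluates; the interpolation coefficient `alpha` is an assumed hyperparameter.

```python
from statistics import mean

def fedavg(client_weights):
    """Classical FedAvg: average client model weights coordinate-wise."""
    return [mean(ws) for ws in zip(*client_weights)]

def personalize(global_w, local_w, alpha=0.5):
    """Simple PFL strategy: interpolate global and local weights."""
    return [alpha * g + (1 - alpha) * l for g, l in zip(global_w, local_w)]

# Toy per-client weight vectors (two parameters per model).
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
global_model = fedavg(clients)                         # -> [3.0, 4.0]
personal_0 = personalize(global_model, clients[0])     # -> [2.0, 3.0]
print(global_model, personal_0)
```

With `alpha = 1` every client uses the purely global model, and with `alpha = 0` each keeps its purely local one; PFL methods differ in how they navigate between these extremes.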