Category
February 2026
Posted
2025.10.10
Modified
2026.01.09
Author
우타리예바 아쎔
Views
130

Differentially Private Context-Aware and Data-Centric Mechanism for Privacy-Preserving Deep Learning

As machine and deep learning spread across diverse aspects of our society, concerns about data privacy are growing stronger, particularly in scenarios where sensitive information can be exposed through various privacy attacks. In this dissertation, we focus on a representative Privacy-Preserving Deep Learning (PPDL) technique, Differential Privacy (DP), a strong notion of privacy that offers robust, quantifiable guarantees by injecting carefully calibrated random noise. However, DP, like other PPDL techniques, has largely relied on "one-size-fits-all" mechanisms that protect data uniformly. This uniformity creates an unavoidable trade-off problem: applying the same level of protection to all data often results in overprotection of non-sensitive information or, conversely, underprotection of specific, highly sensitive data points. This failure arises from ignoring the inherent contextual variability of sensitive data.
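To make the calibrated-noise idea concrete, here is a minimal sketch of the classic Laplace mechanism in Python; the function name and parameters are illustrative, not taken from the dissertation. Note how a single epsilon applied to every release is exactly the "one-size-fits-all" behavior critiqued above.

import numpy as np

def laplace_mechanism(value: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
    """Release `value` with epsilon-DP by adding Laplace noise.

    The noise scale sensitivity/epsilon is the standard calibration for the
    Laplace mechanism: a smaller epsilon (stronger privacy) means larger
    noise, which is the privacy-utility trade-off discussed above.
    """
    scale = sensitivity / epsilon
    return value + np.random.laplace(loc=0.0, scale=scale, size=value.shape)

# A uniform epsilon protects every release identically, regardless of how
# sensitive each item actually is (illustrative values only).
query_result = np.array([42.0])
private_result = laplace_mechanism(query_result, sensitivity=1.0, epsilon=0.5)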

 

This dissertation advances a data-centric perspective built on three principles of context-aware protection: (1) not all parts of a data sample are equally sensitive, so privacy protection should focus on regions of interest (ROIs); (2) privacy preferences vary across data owners, requiring personalized mechanisms; and (3) data contains hierarchical levels of sensitivity, necessitating multi-level protection. Guided by these principles, we introduce three novel privacy-preserving mechanisms, each presented in a separate chapter:

 

DP Patch: ROI-based Approach of Privacy-Preserving Image Processing with Robust Classification. Existing differentially private image anonymization and generation methods treat the entire image sample as private, introducing perturbations to all pixels or to their feature-space representations, which severely degrades image utility. We argue that, to minimize the privacy-utility trade-off, perturbations should be introduced only in the sensitive area, which we define as the region of interest (ROI). The proposed framework introduces a multi-stage privacy-preservation methodology that implements a dual function: differentially private image denoising and ROI-based localization of sensitive content within an image. The identified areas are then protected by injecting DP noise in the form of patches, yielding a privacy-preserving image of higher visual quality than a fully DP-perturbed image. Furthermore, a novel custom model is introduced to enrich the feature representation by utilizing both the newly generated privacy-preserving images and the original differentially private images, mitigating feature loss by excluding the noisy patch regions. The effectiveness of the proposed method is validated by assessing the quality of the generated privacy-preserving images and by comparing the performance of the custom model against established models. Moreover, the proposed method's robustness is evaluated against model inversion attacks.
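As a rough illustration of the ROI-patch idea (not the dissertation's actual pipeline), the sketch below perturbs only a supplied rectangular ROI with Laplace noise and leaves the rest of the image untouched; the function name, the ROI encoding, and the assumed [0, 1] pixel range are all illustrative.

import numpy as np

def dp_patch(image: np.ndarray, roi: tuple, sensitivity: float, epsilon: float) -> np.ndarray:
    """Perturb only the ROI with Laplace noise, leaving other pixels intact.

    roi = (top, left, height, width); pixel values are assumed in [0, 1].
    """
    top, left, h, w = roi
    noisy = image.copy()
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=(h, w) + image.shape[2:])
    noisy[top:top + h, left:left + w] = np.clip(
        noisy[top:top + h, left:left + w] + noise, 0.0, 1.0)
    return noisy

# E.g., a detected face region receives a DP noise patch; the background
# keeps its full visual quality.
img = np.random.rand(64, 64, 3)
protected = dp_patch(img, roi=(16, 16, 32, 32), sensitivity=1.0, epsilon=1.0)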


A User-Centric Privacy Transformation Framework for Federated Learning. Privacy preservation is challenging, especially in multi-client environments such as Federated Learning (FL), where diverse clients have varying privacy needs and preferences. To address this, we propose a user-centric privacy-preserving framework that allows dynamic, customizable privacy adaptation. Unlike traditional approaches, our method enables each FL client to define a user-centric profile specifying sensitive information beyond standardized privacy constraints. This flexibility ensures that privacy measures align with individual standards while maintaining data usability. To further strengthen protection, we introduce adversary-aware transformations that protect sensitive attributes from both human and machine adversaries, and we formulate the search for a privacy budget that defends against both as an optimization problem. The proposed method is empirically evaluated to assess its impact on global model performance and its resistance to privacy attacks. Experimental results demonstrate that our approach effectively mitigates privacy risks while preserving model accuracy, achieving an optimal trade-off between data confidentiality, compliance, and learning efficacy that is crucial for real-world applications.
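One way to picture the budget-selection step is the toy search below: among candidate budgets, pick the one minimizing the worse of the two adversary risks, subject to a client-defined utility floor. The min-max objective, all names, and all numbers are illustrative assumptions, not the dissertation's formulation.

def choose_budget(epsilons, human_risk, machine_risk, utility, min_utility=0.8):
    """Pick the privacy budget that minimizes the worse of two adversary
    risks while keeping model utility above a client-defined floor.

    human_risk, machine_risk, utility: lists of empirical estimates, one
    entry per candidate epsilon (all placeholders for illustration).
    """
    feasible = [i for i, u in enumerate(utility) if u >= min_utility]
    if not feasible:
        return min(epsilons)  # fall back to the strongest protection
    best = min(feasible, key=lambda i: max(human_risk[i], machine_risk[i]))
    return epsilons[best]

# Each FL client could run this locally with its own profile and estimates.
eps = choose_budget(
    epsilons=[0.5, 1.0, 2.0, 4.0],
    human_risk=[0.10, 0.20, 0.40, 0.70],
    machine_risk=[0.20, 0.25, 0.30, 0.50],
    utility=[0.60, 0.78, 0.85, 0.90],
)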


Multi-level Data Sensitivity-Aware Efficient Deep Learning Data Protection Method. The need for privacy preservation when handling different sensitivity levels within datasets has become critical. However, current privacy-preserving methods often treat data uniformly, overlooking hierarchical sensitivity structures. As a result, they fail to account for cross-correlations between features with different sensitivity levels, which can lead to unintended sensitive-data exposure through indirect inference. To address this challenge, we propose a sensitivity-aware deep learning method for multi-level data protection that (1) disentangles correlations between sensitive and less sensitive features in the data pre-processing stage, (2) introduces a sensitivity-aware knowledge distillation technique supporting secure, utility-preserving knowledge transfer, and (3) enables customizable privacy controls based on clearance levels. The proposed approach is also adaptable to federated learning environments, ensuring scalability across decentralized settings. To the best of our knowledge, this is the first work that explores training deep learning models on sensitive and proprietary data requiring multi-level protection. Experimental results demonstrate that the proposed method effectively balances data utility and privacy by disentangling cross-sensitivity correlations with minimal performance loss. In federated settings, it maintains strong performance with lower computational overhead, highlighting its scalability and favorable privacy-utility trade-off.
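For intuition only, the sketch below applies level-dependent Laplace noise to feature columns, with smaller budgets (hence more noise) for more sensitive levels. It omits the correlation-disentanglement and distillation components described above, and every name and number is an assumption rather than the dissertation's method.

import numpy as np

def multilevel_protect(features: np.ndarray, levels: np.ndarray,
                       eps_by_level: dict) -> np.ndarray:
    """Apply stronger Laplace noise to columns with higher sensitivity.

    levels[j] gives the sensitivity level of feature column j;
    eps_by_level maps each level to its privacy budget.
    """
    out = features.astype(float).copy()
    for level, eps in eps_by_level.items():
        cols = np.where(levels == level)[0]
        out[:, cols] += np.random.laplace(0.0, 1.0 / eps,
                                          size=(out.shape[0], cols.size))
    return out

# Level 2 (most sensitive) gets the smallest epsilon, hence the most noise.
X = np.random.rand(100, 5)
lv = np.array([0, 0, 1, 2, 2])
X_priv = multilevel_protect(X, lv, eps_by_level={0: 8.0, 1: 2.0, 2: 0.5})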

 

 

This dissertation introduces three adaptive, data-centric mechanisms that advance Privacy-Preserving Deep Learning by overcoming the limitations of "one-size-fits-all" methods, ensuring strong utility while providing granular, context-aware protection across the spatial, client, and feature dimensions of sensitive data.

 

Degree Date
February 2026
Advisor
최윤호
Keywords
Differential Privacy, Privacy-Preserving Deep Learning
Introduction Webpage
https://sites.google.com/view/assemphd/home
Attachments
No attachments.