Dark Patterns Buster is a project aimed at detecting and classifying manipulative design elements in web and app interfaces, known as "dark patterns." These elements trick users into making decisions they would not otherwise make, such as accepting hidden fees, forced continuity, or disguised ads. The project leverages deep learning techniques, reinforcement learning from human feedback (RLHF), and multimodal approaches to identify and flag such patterns.
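To make the task concrete, here is a toy, rule-based illustration of what "flagging" a dark pattern in UI text means. The category names and cue phrases are hypothetical examples, not from the project; the real system replaces this lookup with the learned classifiers described below.

```python
# Illustrative only: toy cue phrases for the categories named above
# (hidden fees, forced continuity, disguised ads). Not the project's method.
DARK_PATTERN_CUES = {
    "hidden fees": ["processing fee added at checkout", "service charge applies"],
    "forced continuity": ["subscription renews automatically"],
    "disguised ads": ["sponsored", "recommended for you"],
}

def flag_dark_patterns(ui_text: str) -> list:
    """Return the category names whose cue phrases appear in the UI text."""
    text = ui_text.lower()
    return [
        category
        for category, cues in DARK_PATTERN_CUES.items()
        if any(cue in text for cue in cues)
    ]

print(flag_dark_patterns("Your subscription renews automatically each month."))
```

A keyword lookup like this breaks down quickly on paraphrases, which is why the project uses trained text and image models instead.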
- Multimodal Analysis: Combines both text and image processing using BART, RoBERTa, and CLIP models to classify UI/UX elements.
- Reinforcement Learning: Fine-tuned with human feedback to improve the accuracy of detecting dark patterns.
- High Accuracy: Achieved 96% accuracy on text-based and 80% on image-based dark pattern detection.
- Few-Shot Classification: Uses a few-shot learning approach to enhance performance in limited-data scenarios.
- OCR for Images: Uses object detection and Optical Character Recognition (OCR) to analyze image-based dark patterns in UI/UX elements.
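The few-shot approach above can be sketched as nearest-centroid classification: average a handful of labeled embeddings per class, then assign a new input to the class whose centroid is most cosine-similar. The 3-d vectors below are toy stand-ins for real RoBERTa/CLIP features, and the function names are illustrative, not from the project:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def centroid(vectors):
    """Element-wise mean of a list of vectors."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def few_shot_classify(query, support):
    """support maps label -> a few example embeddings per class."""
    centroids = {label: centroid(vs) for label, vs in support.items()}
    return max(centroids, key=lambda label: cosine(query, centroids[label]))

# Toy 3-d "embeddings"; real ones would come from RoBERTa (text) or CLIP (images).
support = {
    "dark_pattern": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "benign":       [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
print(few_shot_classify([0.85, 0.15, 0.05], support))  # -> dark_pattern
```

With only a few labeled examples per class, centroid-based matching like this is a common lightweight baseline for limited-data scenarios.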
- Models: BART and RoBERTa for text classification, CLIP for image classification, YOLO and OCR for UI element detection.
- Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
- Other Tools: Reinforcement Learning from Human Feedback (RLHF), Few-Shot Learning.
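RLHF fine-tuning is typically driven by a reward model trained on human preference pairs with a pairwise (Bradley-Terry style) loss. A minimal sketch of that loss in plain Python, assuming scalar reward scores (a real implementation would use PyTorch tensors and model outputs):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise RLHF reward-model loss: -log sigmoid(r_chosen - r_rejected).

    The loss shrinks as the model scores the human-preferred example
    higher than the rejected one, and grows when it scores them backwards.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Small loss when the model agrees with the annotator, large when it disagrees.
print(preference_loss(2.0, -1.0), preference_loss(-1.0, 2.0))
```

Minimizing this loss over many human-labeled pairs is what pushes the model toward the annotators' judgments of what counts as a dark pattern.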
- Text Detection: Achieved 96% accuracy in identifying dark patterns in text.
- Image Detection: Achieved 80% accuracy in detecting dark patterns in UI/UX images.
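The accuracy figures above are the standard correct-over-total ratio. For reference, a minimal computation (the label strings are illustrative):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

preds = ["dark", "dark", "benign", "dark", "benign"]
truth = ["dark", "benign", "benign", "dark", "benign"]
print(accuracy(preds, truth))  # -> 0.8
```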
- Expand Dataset: Adding more examples of dark patterns to improve model generalization.
- Improve UI/UX Analysis: Enhancing image-classification accuracy through better object detection techniques.
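Object-detection quality of the kind targeted above is usually measured with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch, assuming boxes in `(x1, y1, x2, y2)` corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap
```

Detectors such as YOLO are commonly scored by thresholding this value (e.g. a prediction counts as correct when IoU ≥ 0.5), so tighter boxes around UI elements translate directly into better downstream classification.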