We present an experimental study on synthetic face image detection. We introduce FF5, a dataset covering five fake face generators, including recent diffusion models. A baseline model trained on images from a single generator achieves near-perfect accuracy in distinguishing synthetic from real images, and it handles common distortions (e.g., compression) when trained with matching data augmentation. Additionally, partial manipulations, where synthetic content is blended into real images, can be detected and localized using a YOLO-based model. However, the model is vulnerable to adversarial attacks and fails to generalize to unseen generators, a limitation shared by state-of-the-art methods. Testing on Realistic Vision, a fine-tuned version of Stable Diffusion, confirms these challenges. Our study provides a quantitative evaluation of these key properties and empirical evidence that deepfake detectors primarily learn generator fingerprints embedded in the signal.
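To make the compression-robustness training mentioned above concrete, below is a minimal sketch of a JPEG re-encoding augmentation. It assumes a Pillow-based preprocessing pipeline; the function name `jpeg_augment` and the quality range are illustrative assumptions, not details taken from the paper.

```python
import io
import random

from PIL import Image


def jpeg_augment(img: Image.Image, quality_range=(30, 95)) -> Image.Image:
    """Re-encode an image at a random JPEG quality.

    Simulates the compression artifacts a detector may encounter at
    test time, so that training data matches distorted inputs.
    Quality range is an illustrative choice, not from the paper.
    """
    quality = random.randint(*quality_range)
    buf = io.BytesIO()
    # Round-trip through an in-memory JPEG encode/decode.
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```

Applying such an augmentation with some probability during training is one plausible way to obtain the robustness to compression reported above; the exact augmentation recipe used in the study may differ.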