The face: a crucial marker of identity. But what if that identity is stolen from you? This is already happening, and it is called a 'deepfake.' Deepfake technology is an artificial-intelligence-based human image synthesis method used in a variety of ways, such as creating revenge porn, fake celebrity pornographic videos, or even cyber propaganda. Videos are altered using Generative Adversarial Networks (GANs), in which the face of the speaker is manipulated by a network and tailored to someone else's face. These videos can sometimes be identified as fake by the human eye; however, as neural networks are trained more rigorously on larger datasets, spotting fake videos will become increasingly difficult. Such videos can cause chaos and inflict economic and emotional damage on a person's reputation. Videos targeting politicians as cyber propaganda could prove catastrophic for a country's government.

We will discuss the many tentacles of deepfakes and the dreadful damage they can cause. Most importantly, this talk will provide a demo of the proposed solution: identifying complex deepfake videos using deep learning. This can be achieved with a pre-trained FaceNet model. The model is trained on image data of people of importance or concern, and after training, the output of its final layer (the face embedding) is stored in a database. A set of images sampled from a video is then passed through the neural network, and the final-layer outputs are compared against the values stored in the database. The mean squared difference confirms or refutes the authenticity of the video.

In 2018, we believe deepfakes will progress to a different level. We will also talk about defensive measures against deepfakes.
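To make the verification pipeline concrete, here is a minimal sketch of the embedding-comparison step. It assumes the facenet-pytorch package for the pre-trained FaceNet model (InceptionResnetV1) and face detector (MTCNN); the file names and the threshold value are illustrative placeholders, not part of the original proposal, and a real system would calibrate the cut-off on genuine footage of each person.

```python
# Sketch: compare FaceNet embeddings of sampled video frames against a
# stored reference embedding, using the mean squared difference as the score.
# Assumptions: facenet-pytorch is installed; file paths are hypothetical.
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

mtcnn = MTCNN(image_size=160)                                # face detector / cropper
facenet = InceptionResnetV1(pretrained='vggface2').eval()    # pre-trained FaceNet

def embed(image_path: str) -> np.ndarray:
    """Return the final-layer FaceNet embedding for the face in an image."""
    face = mtcnn(Image.open(image_path))          # crop the detected face
    with torch.no_grad():
        return facenet(face.unsqueeze(0)).squeeze(0).numpy()

def mean_squared_difference(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

# Reference embedding from a trusted image of the person
# (in practice, loaded from the database described above).
reference = embed("trusted_photo.jpg")

# Frames sampled from the video under scrutiny (hypothetical file names).
frames = ["frame_000.jpg", "frame_030.jpg", "frame_060.jpg"]
scores = [mean_squared_difference(embed(f), reference) for f in frames]

THRESHOLD = 0.002  # illustrative placeholder; must be calibrated on real data
mean_score = float(np.mean(scores))
print("mean score:", mean_score)
print("verdict:", "likely authentic" if mean_score < THRESHOLD else "possible deepfake")
```

A small mean squared difference indicates the face in the video matches the stored identity; a large one suggests the frames were synthesized or swapped onto someone else's face.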