In recent months, free deep learning-based software tools have made it easier to create realistic face-swapped videos, commonly known as "DeepFake" (DF) videos. While video manipulation using visual effects has existed for decades, advances in deep learning have drastically increased the realism of fake content and made it far more accessible to create. These AI-generated videos, often referred to as DFs or AI-synthesized media, are relatively easy to produce with artificial intelligence tools. Detecting them, however, is a significant challenge, as training algorithms to identify manipulated content is not straightforward. To address this, we have developed a system to detect DFs using Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The system uses a CNN to extract frame-level features from videos. These features are then passed to an LSTM-based Recurrent Neural Network (RNN), which analyzes the temporal relationships between frames to classify whether a video has been manipulated. The model specifically targets the temporal inconsistencies introduced by DF generation tools. We tested our system on a large dataset of fake videos and achieved competitive results using a simple architecture.

Keywords: DeepFake Detection, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM).

JEL Classification Number: I20, C88, O33
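
To illustrate the CNN-to-LSTM pipeline described above, a minimal PyTorch sketch is given below. The backbone, feature dimensions, sequence length, and class layout are illustrative assumptions, not the authors' actual configuration.

    import torch
    import torch.nn as nn

    class DeepFakeDetector(nn.Module):
        """Illustrative CNN + LSTM classifier (assumed architecture, not the paper's exact model).

        A small CNN extracts a feature vector per frame; an LSTM models the
        temporal relationships across frames; a linear head outputs real/fake logits.
        """
        def __init__(self, feature_dim=128, hidden_dim=64, num_classes=2):
            super().__init__()
            # Frame-level feature extractor (stand-in for any CNN backbone).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, feature_dim),
            )
            # LSTM over the sequence of per-frame features.
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, frames):
            # frames: (batch, seq_len, 3, H, W)
            b, t, c, h, w = frames.shape
            feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
            _, (h_n, _) = self.lstm(feats)      # final hidden state summarizes the clip
            return self.classifier(h_n[-1])     # logits over {real, fake}

    # Usage example: a batch of 2 clips, 10 frames each, 112x112 RGB.
    model = DeepFakeDetector()
    clips = torch.randn(2, 10, 3, 112, 112)
    logits = model(clips)                       # shape: (2, 2)

The key design choice reflected here is that the CNN is applied to every frame independently, while the LSTM consumes the resulting feature sequence so that temporal inconsistencies between consecutive frames can drive the final real/fake decision.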