#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been confirmed to be manipulated. The images circulating on social media were produced with AI manipulation tools; the original image, available on several credible news websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks fired at Trump during a rally in Butler, Pennsylvania; one attendee was killed and two were critically injured before the Secret Service neutralised the shooter. The circulating photos with faked smiles stirred up suspicion, and the manipulated image was debunked by the CyberPeace Research Team.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for any credible source supporting the claim. We found several articles and images of the incident, but the images in them were different.

The original image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, Content at Scale's AI Image Detection, which also found the image to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image published by CNN shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Related Blogs

Introduction
Sexual offences against children have recently come under scrutiny after the Madras High Court ruled that merely watching and downloading child sexual abuse material was not an offence. In response, the Supreme Court, on 23 September 2024, ruled that Section 15 of the POCSO Act and Section 67B of the IT Act penalise any form of use of child pornography, including storing and watching such pornographic content. The Supreme Court further recommended replacing the term "child pornography", which it called a misnomer that does not capture the full extent of the crime, with the more inclusive term "Child Sexual Exploitative and Abuse Material" (CSEAM). This term more accurately reflects the reality that such images and videos are not merely pornographic but are records of incidents in which a child has been sexually exploited and abused, or in which abuse of children has been portrayed through self-generated visual depictions.
Intermediaries cannot claim exemption from Liability U/S 79
Previously, intermediaries claimed safe harbour by merely complying with the requirements stipulated under the MOU. As per the decision of the SC, an intermediary can no longer claim exemption from liability under Section 79 of the IT Act for any third-party information, data, or communication link made available or hosted by it unless it conducts due diligence and complies with the relevant provisions of the POCSO Act. This follows from Sections 19 and 20 of the POCSO Act read with Rule 11 of the POCSO Rules, which are mandatory in nature.
Due diligence under Section 79 of the IT Act includes the removal of child pornographic content and the immediate reporting of such content to the concerned police units in the manner specified under the POCSO Act and Rules. In this way, the Supreme Court has broadened the interpretation and scope of the 'due diligence' obligation under Section 79 of the IT Act. It also noted that mere compliance with the IT Act will not absolve an intermediary of liability under the POCSO Act unless it duly complies with the requirements and procedures set out under it, particularly Section 20 of the POCSO Act and Rule 11 of the POCSO Rules.
Bar on Judicial Use of the term ‘Child Porn’
The Supreme Court found that the term 'child pornography' trivialises the offence, since pornography is often seen as a consensual act between adults. It emphasised using the term Child Sexual Exploitative and Abuse Material (CSEAM), as it would underline the exploitation of children, highlight the criminality of the act, and shift the focus towards a more robust framework to counter these crimes. The Court also stated that the Union of India should consider amending the POCSO Act to replace the term "child pornography" with "child sexual exploitative and abuse material" (CSEAM), which would more accurately reflect the reality of such offences. It further directed that the term "child pornography" shall not be used in any judicial order or judgment, and that the term "CSEAM" should be used instead.
Curbing CSEAM Content on Social Media Platforms
Social media intermediaries and expert organisations play an important role in curbing CSEAM content. Per the directions of the Apex Court, positive, age-appropriate sex education is needed to prevent youth from engaging in harmful sexual behaviours, including the distribution and viewing of CSEAM, and all stakeholders must take proactive measures to counter these offences. This entails promoting age-appropriate and lawful content on social media platforms and requires the platforms to ensure compliance with applicable provisions.
Conclusion
In light of the Supreme Court's landmark ruling, it is imperative to acknowledge the pressing necessity of establishing a safer online environment that shields children from exploitation. The shift towards using "Child Sexual Exploitative and Abuse Material" (CSEAM) emphasises the severity of the crime and the need for a vigilant response. Social media intermediaries must honour their commitment to report and remove exploitative content and must ensure compliance with POCSO and IT regulations. Furthermore, comprehensive, age-appropriate sex education can serve as a preventive measure, educating young people about the moral and legal ramifications of sexual offences, encouraging respect and awareness, and ensuring safer cyberspace.
References
- https://www.scconline.com/blog/post/2024/09/23/storing-watching-child-pornography-crime-supreme-court-pocso-it-act/#:~:text=Supreme%20Court%3A%20The%20bench%20of,watching%20of%20such%20pornographic%20content
- https://timesofindia.indiatimes.com/india/supreme-court-viewing-child-porn-is-offence-under-pocso-it-acts/articleshow/113613572.cms
- https://bwlegalworld.com/article/dont-use-term-child-pornography-says-sc-urges-parliament-to-amend-pocso-act-534053
- https://indianexpress.com/article/india/child-pornography-law-pocso-it-supreme-court-9583376/

Introduction
The Telecom Regulatory Authority of India (TRAI) issued a consultation paper titled “Encouraging Innovative Technologies, Services, Use Cases, and Business Models through Regulatory Sandbox in Digital Communication Sector”. The paper presents a draft sandbox framework for live testing of new digital communication products or services in a controlled regulatory environment, and TRAI seeks comments from stakeholders on several parts of the framework.
What is digital communication?
Digital communication is the use of internet tools such as email, social media messaging, and texting to communicate with other people or a specific audience. Even something as easy as viewing the content on this webpage qualifies as digital communication.
Aim of Paper
- Frameworks are intended to support regulators’ desire for innovation while also ensuring economic resilience and consumer protection. Considering this, the Department of Telecom (DoT) asked TRAI to offer recommendations on a regulatory sandbox framework. TRAI approaches the issue with the goal of encouraging creativity and hastening the adoption of cutting-edge digital communications technologies.
- Artificial intelligence, the Internet of Things, edge computing, and other emerging technologies are revolutionizing how we connect, communicate, and access information, driving the digital communication sector to rapidly expand. To keep up with this dynamic environment, an enabling environment for the development and deployment of novel technologies, services, use cases, and business models is required.
- The regulatory sandbox concept is becoming increasingly popular around the world as a means of encouraging innovation in a range of industries. A regulatory sandbox is a controlled environment in which businesses and innovators can test their concepts, products, and services under relaxed regulatory requirements and close supervision.
- A regulatory sandbox will benefit the telecom startup ecosystem by providing access to a real-time network environment and other data, allowing startups to evaluate the reliability of new applications before releasing them to the market. It also aims to stimulate cross-sectoral collaboration for such testing by engaging the assistance of other ministries and departments, giving start-ups a single window for acquiring all clearances.
What is regulatory sandbox?
- A regulatory sandbox is a controlled regulatory environment in which new products or services are tested in real-time.
- It serves as a “safe space” for businesses because authorities may or may not allow certain relaxations for the sole purpose of testing.
- The sandbox enables the regulator, innovators, financial service providers, and clients to perform field testing in order to gather evidence on the benefits and hazards of new financial innovations, while closely monitoring and mitigating their risks.
What are the advantages of having a regulatory sandbox?
- Firstly, regulators obtain first-hand empirical evidence on the benefits and risks of emerging technologies and their implications, allowing them to form an informed opinion on the regulatory changes or new regulations that may be required to support useful innovation while mitigating the associated risks.
- Second, sandbox customers can evaluate the viability of a product without the need for a wider and more expensive roll-out. If the product appears to have a high chance of success, it may be authorized and delivered to a wider market more quickly.
Digital communication sector and Regulatory Sandbox
- Many countries’ regulatory organizations have built sandbox settings for telecom tech innovation.
- These frameworks are intended to encourage regulators’ desire for innovation while also promoting economic resilience and consumer protection.
- In this context, the Department of Telecom (DoT) had asked TRAI to give recommendations on a regulatory sandbox framework.
- Written comments on the draft framework will be received until July 17, 2023, and counter-comments until August 1, 2023. The Authority's goal in the digital communication industry is to foster creativity and expedite the use of emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), and edge computing. These technologies are changing the way individuals connect, engage, and access information, causing rapid changes in the industry.
Conclusion
According to TRAI, these technologies are changing how individuals connect, engage, and obtain information, resulting in significant changes in the sector.
The regulatory sandbox also aims to stimulate cross-sectoral collaboration for such testing by engaging the assistance of other ministries and departments, giving start-ups a single window for acquiring all clearances. The consultation paper covers some of the worldwide regulatory sandbox frameworks in use in the digital communication industry, as well as frameworks in use within the country in other sectors.

Introduction
AI has revolutionised the way we look at emerging technologies. It can perform complex tasks in a fraction of the time humans need. However, AI's potential for misuse has led to a rise in cybercrime. The rapid expansion of generative AI tools has fuelled scams such as deepfakes, voice cloning, and cyberattacks targeting critical infrastructure and other organisations, along with threats to data protection and privacy. AI can produce highly realistic videos, images, and voices, which cyber attackers misuse to commit cybercrimes.
Technologies such as generative AI, deepfakes, and machine learning are advancing rapidly. They offer convenience in performing many tasks and can assist both individuals and businesses. On the other hand, because these technologies are easily accessible, cybercriminals leverage them for malicious activities and various cyber frauds, and new cyber threats have emerged from the misuse of AI, deepfakes, and voice clones.
What is Deepfake?
Deepfake is an AI-based technology capable of creating images or videos that look realistic but are in fact generated by machine-learning algorithms. Because the technology is easily accessible, fraudsters misuse it to deceive and scam people, manipulating audio and video content so that it appears genuine while being entirely fabricated. Voice cloning is a form of deepfake: audio can be deepfaked too, producing a synthetic voice that closely resembles a real person's.
How Deepfake Can Harm Organizations or Enterprises?
- Reputation: Deepfakes can seriously damage an organisation's reputation. Fake representations or interactions, for example a video misrepresenting the CEO online, could undermine an enterprise's credibility, resulting in loss of users and financial losses. To protect against such incidents, organisations must closely monitor online mentions and keep tabs on what is being said or posted about the brand. Deepfake content can also be misused to impersonate leaders, financial officers, and other officials of the organisation.
- Misinformation: Deepfake technology can be misused to spread misinformation or misrepresentation about the organisation.
- Deepfake fraud calls misrepresenting the organisation: There have been incidents where bad actors pretend to be from legitimate organisations and seek personal information, such as helpline fraudsters, fake representatives from hotel booking departments, and fake loan providers. Bad actors use voice clones or deepfake video calls to pose as representatives of legitimate organisations while, in reality, deceiving people.
How can organizations combat AI-driven cybercrimes such as deepfake?
- Cybersecurity strategy: Organisations need to keep in place a wide range of cybersecurity strategies or use advanced tools to combat the evolving disinformation and misrepresentation caused by deepfake technology. Cybersecurity tools can be utilised to detect deepfakes.
- Social media monitoring: Social media monitoring can help detect unusual activity. Organisations can select and implement tools that detect deepfakes and demonstrate media provenance, along with real-time verification capabilities and procedures. Reverse image searches, such as TinEye, Google Image Search, and Bing Visual Search, can be extremely useful if the media is composed from existing images.
- Employee Training: Employee education on cybersecurity will also play a significant role in strengthening the overall cybersecurity posture of the organisation.
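The monitoring and detection steps above often come down to checking whether a circulating image is a near-duplicate of a known original. The sketch below illustrates the principle with a pure-Python "average hash" over toy grayscale grids; real tools such as TinEye or perceptual-hashing libraries work on the same idea at far larger scale, and the pixel grids here are hypothetical stand-ins for decoded images:

```python
# Illustrative sketch of perceptual image comparison via an "average hash".
# The 4x4 grayscale grids below are hypothetical stand-ins for decoded,
# downscaled images; a real pipeline would decode and resize actual files.

def average_hash(pixels):
    """Bit per pixel: 1 where the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance means visually similar."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [
    [10, 20, 200, 210],
    [12, 18, 205, 202],
    [11, 25, 198, 207],
    [14, 22, 201, 209],
]
# A lightly edited copy (e.g. one altered region) flips only a few bits.
edited = [row[:] for row in original]
edited[1][1] = 230

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # prints 1 — a small distance flags a near-duplicate
```

A distance of zero means identical hashes; a small distance, as here, flags a lightly edited near-duplicate of a known original, while unrelated images typically differ in many bits. This is a teaching sketch, not a substitute for dedicated deepfake-detection tooling.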
Conclusion
There have been incidents where cybercriminals and bad actors have misused AI-driven tools, including synthetic videos developed with AI. Generative AI has gained significant popularity for capabilities that produce synthetic media, but there are concerns about its misuse in disinformation operations designed to influence the public and spread false information. In particular, the synthetic media threats organisations most often face include undermining the brand, financial fraud, threats to the security or integrity of the organisation itself, and impersonation of the brand's leaders for financial gain.
Synthetic media attacks often target organisations with the intent to defraud them for financial gain; examples include fake personal profiles on social networking sites and deceptive deepfake calls. Organisations need a proper cybersecurity strategy in place to combat such evolving threats. Monitoring and detection should be performed by organisations, and employee training on cybersecurity will also play a crucial role in effectively dealing with the threats posed by the misuse of AI-driven technologies.
References:
- https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- https://www.securitymagazine.com/articles/98419-how-to-mitigate-the-threat-of-deepfakes-to-enterprise-organizations