#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be digitally manipulated. The images circulating on social media were produced with AI manipulation tools; the original photograph, published on several websites, shows no smiling agents. The incident occurred on July 13, 2024, when Thomas Matthew Crooks opened fire on Trump at a rally in Butler, Pennsylvania, leaving one attendee dead and two critically injured before Secret Service personnel neutralised the shooter. The circulating photos, in which smiles were faked, have stirred suspicion online. The CyberPeace Research Team investigated and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.


We then ran the image through another AI image detection tool, Content at Scale's AI image detector, which also flagged it as AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading

Introduction
Google.org is committed to enhancing Internet safety and promoting responsible online behaviour. ‘Google for INDIA 2023’, an innovation conclave, took place on 19th October 2023, where Google.org laid out its vision for a safer Internet and for combating misinformation, financial fraud, and other threats posed by bad actors. Google is committed to leading this charter and engaging with all stakeholders, including government agencies. Google.org has partnered with CyberPeace Foundation to foster a safer online environment and empower users to make informed decisions on the Internet. CyberPeace will run a nationwide awareness and capacity-building initiative equipping more than 40 million Indian netizens with fact-checking techniques, tools, SOPs, and guidance for responsible and safe online behaviour. The campaign will be delivered in 15 Indian regional languages so its learning outcomes reach the whole nation. Together, Google.org and CyberPeace Foundation aim to make the Internet safer for everyone, ensuring that progress for everyone is built on a strong foundation of trusted information and pursuing the true spirit of “Technology for Good”.
Google.org and CyberPeace together for enhanced online safety
A new $4 million grant to CyberPeace Foundation will support a nationwide awareness-building programme and a comprehensive multilingual digital resource hub, with content available in up to 15 Indian languages, to empower nearly 40 million underserved people across the country to build resilience against misinformation and practise responsible online behaviour. Together, Google.org and CyberPeace are working to create a trusted Internet and a safer digital environment. The campaign will run for three years, with the following key components at its core:
- CyberPeace Corps Volunteers: This will be a pan India volunteer engagement initiative to create a community of 9 million CyberPeace Ambassadors/First Responders/Volunteers to fight misinformation and promote responsible online behaviour going far into the rural, marginalised and most vulnerable strata of society.
- Digital Resource Hub: In pursuance of the campaign, CyberPeace is developing a cutting-edge platform offering a wealth of resources on media literacy, responsible content creation, and cyber hygiene translated into 15 Indian regional languages for a widespread impact on the ground.
- Public Sensitisation: CyberPeace will conduct an organic series of online and offline events focused on empowering netizens to discern fact from fiction. These sensitisation drives will be led by state master trainers from different regions of India to ensure all states and UTs are covered.
- CyberPeace Quick Reaction Team: A specialised team of tech enthusiasts that will work closely with platforms to rapidly address new-age cyber threats and misinformation campaigns in real-time and establish best practices and SoPs for the diverse elements in the industries.
- Engaging Multimedia Content: With CyberPeace’s immense expertise in E-Course and digital content, the campaign will produce a range of multilingual multimedia resources, including informative videos, posters, games, contests, infographics, and more.
- Fact-check Unit: Fact-check units will play a crucial role in monitoring, identifying, and analysing suspected information, busting the growing incidents of misinformation. Fake news has negative consequences for society at large, and fact-check units play a significant role in controlling its spread.
Fight Against Misinformation
Misinformation is rampant across the world and demands attention; with the increasing penetration of social media and the Internet, it remains a global issue. Google.org has taken up the initiative to address this issue in India and, in collaboration with CyberPeace Foundation, has piloted multiple avenues for mass-scale awareness and upskilling campaigns, with the vision of upskilling over 40 million people in the country, building resilience against misinformation, and encouraging responsible online behaviour.
Maj Vineet Kumar, Founder of CyberPeace, said,
"In an era in which digital is deeply intertwined with our lives, knowing how to discern, act on, and share the credible from the wealth of information available online is critical to our well-being, and of our families and communities. Through this initiative, we’re committing to help Internet users across India become informed, empowered and responsible netizens leading through conversations and actions. Whether it’s in fact-checking information before sharing it, or refraining from sharing unverified news, we all play an important role in building a web that is a safe and inclusive space for everyone, and we are extremely grateful to Google.org for propelling us forward in this mission with their grant support.”
Annie Lewin, Senior Director of Global Advocacy and Head of Asia Pacific, Google.org said:
“We have a longstanding commitment to supporting changemakers using technology to solve humanity's biggest challenges. And, the innovation and zeal of Indian nonprofit organisations has inspired us to deepen our commitment in India. With the new grant to CyberPeace Foundation, we are proud to support solutions that speak directly to Google’s DNA, helping first-time internet users chart their path in a digital world with confidence. Such solutions give us pride and hope that each step, built on a strong foundation of trusted information, will translate into progress for all.”
Conclusion
Google.org has partnered with government agencies and other Indian organisations with the vision of future-proofing India's digital public infrastructure and staying a step ahead on Internet safety, keeping citizens safe online. This is Google.org's largest step yet towards online safety in India. Misinformative content remains widespread across digital media and the Internet. This proactive initiative of Google.org in collaboration with CyberPeace is a commendable step to prevent the spread of misinformation, empower users to act responsibly while sharing information and to make informed decisions while using the Internet, hence creating a safe digital environment for everyone.
References:
- https://www.youtube.com/live/-b4lTVjOsXY?feature=shared
- https://blog.google/intl/en-in/products/google-for-india-2023-product-announcements/
- https://blog.google/intl/en-in/partnering-indias-success-in-a-new-digital-paradigm/
- https://telecom.economictimes.indiatimes.com/news/internet/google-to-debut-credit-in-india-announces-a-slew-of-ai-powered-launches/104547623
- https://theprint.in/economy/google-for-india-2023-tech-giant-says-it-removed-2-million-violative-videos-in-q2-2023/1810201/

India is the world's largest democracy, and conducting free and fair elections is a mammoth task shouldered by the Election Commission of India. But technology is transforming every aspect of the electoral process in the digital age, with Artificial Intelligence (AI) being integrated into campaigns, voter engagement, and election monitoring. In the upcoming Bihar elections of 2025, all eyes are on how the use of AI will influence the state polls and the precedent it will set for future elections.
Opportunities: Harnessing AI for Better Elections
Breaking Language Barriers with AI:
AI is reshaping political outreach by making speeches accessible in multiple languages. At the Kashi Tamil Sangamam in 2024, the PM’s Hindi address was AI-dubbed into Tamil in real time. Since then, several speeches have been rolled out in eight languages, ensuring inclusivity and connecting more effectively with voters beyond Hindi-speaking regions.
Monitoring and Transparency
During Bihar’s Panchayat polls, the State Election Commission used Staqu’s JARVIS, an AI-powered system that connects with CCTV cameras to monitor EVM screens in real time. By reducing human error, JARVIS brought greater accuracy, speed, and trust to the counting process.
AI for Information Access on Public Service Delivery
NaMo AI is a multilingual chatbot that citizens can use to inquire about the details of public services. The feature aims to make government schemes easy to understand, transparent, and help voters connect directly with the policies of the government.
Personalised Campaigning
AI is transforming how campaigns connect with voters. By analysing demographics and social media activity, AI builds detailed voter profiles. This helps craft messages that feel personal, whether on WhatsApp, a robocall, or a social media post, ensuring each group hears what matters most to them. This aims to make political outreach sharper and more effective.
Challenges: The Dark Side of AI in Elections
Deepfakes and Disinformation
AI-powered deepfakes create hyper-realistic videos and audio that are nearly impossible to distinguish from the real. In elections, they can distort public perception, damage reputations, or fuel disharmony on social media. There is a need for mandatory disclaimers stating when content is AI-generated, to ensure transparency and protect voters from manipulative misinformation.
Data Privacy and Behavioural Manipulation
Cambridge Analytica’s consulting services, provided by harvesting the data of millions of users from Facebook without their consent, revealed how personal data can be weaponised in politics. This data was allegedly used to “microtarget” users through ads, which could influence their political opinions. Data mining of this nature can be supercharged through AI models, jeopardising user privacy, trust, safety, and casting a shadow on democratic processes worldwide.
Algorithmic Bias
AI systems are trained on datasets. If the datasets contain biases, AI-driven tools could unintentionally reinforce stereotypes or favor certain groups, leading to unfair outcomes in campaigning or voter engagement.
The Road Ahead: Striking a Balance
The adoption of AI in elections opens a Pandora's box of uncertainties. On the one hand, it offers solutions for breaking language barriers and promoting inclusivity. On the other hand, it opens the door to manipulation and privacy violations.
To counter risks from deepfakes and synthetic content, political parties are now advised to clearly label AI-generated materials and add disclaimers in their campaign messaging. In Delhi, a nodal officer has even been appointed to monitor social media misuse, including the circulation of deepfake videos during elections. The Election Commission of India constantly has to keep up with trends and tactics used by political parties to ensure that elections remain free and fair.
Conclusion
From Bihar’s pioneering experiments with JARVIS in Panchayat elections, which gave vote counting greater accuracy and speed, India is witnessing both sides of this technological revolution. The challenge lies in ensuring that AI strengthens democracy rather than undermining it. Deepfakes, algorithmic bias, and data misuse remind us of the risks when technology oversteps. The real challenge is striking the right balance: embracing AI in elections to enhance inclusivity and transparency while safeguarding trust, privacy, and the integrity of democratic processes.
References
- https://timesofindia.indiatimes.com/india/how-ai-is-rewriting-the-rules-of-election-campaign-in-india/articleshow/120848499.cms#
- https://m.economictimes.com/news/elections/lok-sabha/india/2024-polls-stand-out-for-use-of-ai-to-bridge-language-barriers/articleshow/108737700.cms
- https://www.ndtv.com/india-news/namo-ai-on-namo-app-a-unique-chatbot-that-will-answer-everything-on-pm-modi-govt-schemes-achievements-5426028
- https://timesofindia.indiatimes.com/gadgets-news/staqu-deploys-jarvis-to-facilitate-automated-vote-counting-for-bihar-panchayat-polls/articleshow/87307475.cms
- https://www.drishtiias.com/daily-updates/daily-news-editorials/deepfakes-in-elections-challenges-and-mitigation
- https://internetpolicy.mit.edu/blog-2018-fb-cambridgeanalytica/
- https://www.deccanherald.com/elections/delhi/delhi-assembly-elections-2025-use-ai-transparently-eci-issues-guidelines-for-political-parties-3357978#

Introduction
The digital realm is evolving at a rapid pace, revolutionising cyberspace at breakneck speed. However, this dynamic growth has left several operational and regulatory lacunae in the fabric of cyberspace, which cybercriminals exploit for their ulterior motives. One threat that emerged rapidly in 2024 is proxyjacking, in which cybercriminals compromise vulnerable systems and sell their bandwidth to third-party proxy services. This poses a significant risk to organisations and individual server operators alike.
Proxyjacking is a cyber attack that abuses legitimate bandwidth-sharing services such as Peer2Profit and HoneyGain. These platforms let participants monetise their surplus internet bandwidth by sharing it with other users: participants install bandwidth-sharing software, which adds their system to a proxy network through which other users can route traffic. The setup is intended to enhance privacy and provide access to geo-locked content, and the model itself is harmless, but proxyjacking occurs when such services are exploited on a victim's machine without their consent.
The Modus Operandi
Cybercriminals hijack vulnerable systems and sell the bandwidth of the infected devices, typically by establishing Secure Shell (SSH) connections to poorly secured servers. Security researchers have observed these campaigns using honeypots such as Cowrie, which emulates a UNIX system to record attackers' behaviour. Once inside a system, attackers use legitimate tools, such as publicly available Docker images, to run proxy monetisation services. Because these tools are genuine software in and of themselves, they are largely invisible to anti-malware products, and endpoint detection and response (EDR) tools struggle with the same problem.
The Major Challenges
Limitation of Current Safeguards – Current malware detection software cannot distinguish between malicious and genuine use of bandwidth-sharing services, as the traffic itself is not inherently malicious.
Bigger Threat Than Cryptojacking – Proxyjacking poses a bigger threat than cryptojacking, in which compromised systems are used to mine cryptocurrency. Proxyjacking uses minimal system resources, making it far harder to identify: it offers perpetrators a higher degree of stealth because it is resource-light, whereas cryptojacking leaves CPU and GPU usage footprints.
Role of Technology in the Fight Against Proxyjacking
Advanced Safety Measures- Implementing advanced safety measures is crucial in combating proxyjacking. Network monitoring tools can help detect unusual traffic patterns indicative of proxyjacking. Key-based authentication for SSH can significantly reduce the risk of unauthorized access, ensuring that only trusted devices can establish connections. Intrusion Detection Systems and Intrusion Prevention Systems can go a long way towards monitoring unusual outbound traffic.
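As an illustrative sketch of the key-based SSH hardening described above (these directive names are standard OpenSSH server options; the file path assumes a typical Linux install):

```shell
# /etc/ssh/sshd_config -- hardening against the brute-force SSH logins
# that proxyjacking campaigns commonly rely on for initial access.

PasswordAuthentication no         # permit key-based authentication only
PubkeyAuthentication yes
PermitRootLogin prohibit-password # root may log in with a key, never a password
MaxAuthTries 3                    # throttle repeated credential guesses
LogLevel VERBOSE                  # log key fingerprints for later audits

# Apply the changes (on systemd-based distributions):
#   sudo systemctl reload sshd
```

Disabling password logins removes the credential-guessing avenue entirely, while verbose logging supports the audit trail recommended below.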
Robust Verification Processes- sharing services must adopt robust verification processes to ensure that only legitimate users are sharing bandwidth. This could include stricter identity verification methods and continuous monitoring of user activities to identify and block suspicious behaviour.
Policy Recommendations
Verification for Bandwidth Sharing Services – Mandatory verification standards should be enforced for bandwidth-sharing services, including stringent Know Your Customer (KYC) protocols to verify the identity of users. A strong regulatory body would ensure proper compliance with verification standards and impose penalties. The transparency reports must document the user base, verification processes and incidents.
Robust SSH Security Protocols – Key-based authentication for SSH should be mandated across organisations to neutralise the risk of brute-force attacks. Mandatory security audits of SSH configurations will help ensure best practices are followed and vulnerabilities are identified. Detailed logging of SSH login attempts will streamline the identification and investigation of suspicious behaviour.
Effective Anomaly Detection System – Design a standard anomaly detection system to monitor networks. The industry-wide detection system should focus on detecting inconsistencies in traffic patterns indicating proxy-jacking. Establishing mandatory protocols for incident reporting to centralised authority should be implemented. The system should incorporate machine learning in order to stay abreast with evolving attack methodologies.
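A minimal illustration of the traffic-pattern idea: a z-score outlier check over per-interval outbound traffic samples. The function name and threshold are hypothetical, and a production system would use far richer features (destinations, ports, time-of-day baselines) and, as noted above, machine learning.

```python
from statistics import mean, stdev

def flag_bandwidth_anomalies(samples, threshold=2.5):
    """Return indices of outbound-traffic samples (e.g. MB per minute)
    whose z-score exceeds the threshold -- a crude stand-in for the
    sustained, unexplained upload traffic that proxyjacking produces."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:  # perfectly flat traffic: nothing to flag
        return []
    return [i for i, s in enumerate(samples) if abs(s - mu) / sigma > threshold]

# A mostly flat baseline with one sustained spike at index 6.
traffic = [12, 14, 11, 13, 12, 15, 95, 13, 12, 14]
print(flag_bandwidth_anomalies(traffic))  # → [6]
```

Flagged intervals would then be cross-checked against known bandwidth-sharing endpoints before raising an incident, keeping false positives manageable.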
Framework for Incident Response – A national framework should include guidelines for investigation, response, and remediation to be followed by organisations. A centralised database can be used for logging and tracking all proxyjacking incidents, allowing information sharing on a real-time basis. This mechanism will aid in identifying emerging trends and common attack vectors.
Whistleblower Incentives – Enacting whistleblower protection laws will ensure the proper safety of individuals reporting proxyjacking activities. Monetary rewards provide extra incentives and motivate individuals to join whistleblowing programs. To provide further protection to whistleblowers, secure communication channels can be established which will ensure full anonymity to individuals.
Conclusion
Proxyjacking represents an insidious and complicated threat in cyberspace. By exploiting legitimate bandwidth-sharing services, cybercriminals can profit while remaining largely anonymous. Addressing this issue requires a multifaceted approach, including advanced anomaly detection systems, effective verification systems, and comprehensive incident response frameworks. These measures, combined with strong cyber awareness among netizens, will help ensure a healthy and robust cyberspace.
References
- https://gridinsoft.com/blogs/what-is-proxyjacking/
- https://www.darkreading.com/cyber-risk/ssh-servers-hit-in-proxyjacking-cyberattacks
- https://therecord.media/hackers-use-log4j-in-proxyjacking-scheme