#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures showing US Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been found to be manipulated. The pictures circulating on social media were produced with AI-based manipulation tools; the original image, available on several credible websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks fired at Trump during a rally in Butler, Pennsylvania; one attendee was killed and two were critically injured. The Secret Service neutralised the shooter, and the circulating photos with faked smiles have stirred up suspicion. The CyberPeace Research Team investigated and debunked the manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool TrueMedia.


We then checked with another AI image detection tool, the Content at Scale AI image detector, which also found the image to be AI-manipulated.
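Detectors like these typically look for statistical traces of generation or editing. As a purely illustrative sketch (not the method used by either tool above), a basic error-level analysis (ELA) can be run with the Pillow library: regions edited after an image's original JPEG compression often re-compress differently from the rest of the picture.

```python
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    """Re-save an image as JPEG and measure per-pixel differences.

    Regions edited after the original compression often stand out
    because they re-compress with a different error level.
    """
    original = Image.open(path).convert("RGB")

    # Re-compress the image in memory at a known quality level
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Per-pixel absolute difference between original and re-saved copies
    diff = ImageChops.difference(original, resaved)

    # Scale differences so subtle compression artifacts become visible
    extrema = diff.getextrema()
    max_diff = max(channel[1] for channel in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda px: px * scale)
```

This is only one forensic signal; real detection services combine many such cues, and ELA alone cannot prove or disprove AI generation.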

Comparison of both photos:

Hence, given the lack of credible sources and the detection of AI manipulation, we concluded that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during an assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Related Blogs
The world of Artificial Intelligence is entering a new phase with the rise of Agentic AI, often described as the third wave of AI evolution. Unlike earlier systems that relied on static models (which learn only from the data they are fed) and reactive outputs, Agentic AI introduces intelligent agents that can make decisions, take initiative, and act autonomously in real time. These systems are designed to require minimal human oversight while actively collaborating and learning continuously. Such capabilities indicate an impending shift in how Indian businesses can function: Agentic AI can streamline operations, personalise services, and drive innovation at scale.
India and Agentic AI
Building as we go, India is making continuous strides in the AI revolution, deliberating on government frameworks while simultaneously adapting. At Microsoft's Pinnacle 2025 summit in Hyderabad, India's pivotal role in shaping the future of Agentic AI was brought into the spotlight. With over 17 million developers on GitHub and ambitions to become the world's largest developer community by 2028, India's tech talent is gearing up to lead global AI innovation. Microsoft's Azure AI Foundry was also highlighted as evidence of the country's growing influence in the AI landscape.
Indian companies are actively integrating Agentic AI into their operations to enhance efficiency and customer experience. Zomato is leveraging AI agents to optimise delivery logistics, ensuring timely and efficient service. Infosys has developed AI-driven copilots to assist developers in code generation, reducing development time, requiring fewer people to work on a particular project, and improving software quality.
As per a report by Deloitte, the Indian AI market is projected to reach approximately $20 billion by 2028. However, this growth is accompanied by significant challenges: 92% of Indian executives identify security concerns as the primary obstacle to responsible AI usage, and regulatory uncertainty and privacy risks around sensitive data were also highlighted.
Challenges in Adoption
Despite the enthusiasm, several barriers hinder the widespread adoption of Agentic AI in India:
- Skills Gap: While the AI workforce is expected to grow to 1.25 million by 2027, the current growth rate of 13% is insufficient to meet market demand.
- Data Infrastructure: Effective AI systems require robust, structured, and accessible datasets. Many organisations lack the necessary data maturity, leading to flawed AI outputs and decision-making failures.
- Trust and Governance: Building trust in AI systems is crucial. Concerns over data privacy, ethical usage, and regulatory compliance require robust governance frameworks to ensure the adoption of AI in a responsible manner.
- Looming fear of job loss: As AI continues to take up more sophisticated roles, a general feeling of hesitancy with respect to the loss of employment/human labour might come in the way of adopting such measures.
- Outsourcing: Currently, most companies prefer outsourcing or buying AI solutions rather than building them in-house, which limits their ability to adapt those solutions to evolving needs.
The Road Ahead
To fully realise the potential of Agentic AI, India must address the following challenges:
- Training the Workforce: Initiatives and workshops tailored for employees that provide AI training can prove to be helpful. Some relevant examples are Microsoft’s commitment to provide AI training to 2 million individuals by 2025 and Infosys's in-house AI training programs.
- Data Readiness: Investing in modern data infrastructure and promoting data literacy are essential to improve data quality and accessibility.
- Establishing Governance Frameworks: Developing clear regulatory guidelines and ethical standards will foster trust and facilitate responsible AI adoption. Efforts like the IndiaAI Mission, which track and respond to evolving technology, are imperative.
Agentic AI holds untapped potential to transform India's business landscape when coupled with innovation and a focus on quality that enhances global competitiveness. By proactively addressing the existing challenges and investing in in-house development, India can realise this potential, lay the foundation for a new technological revolution, and solidify its position as a global AI leader.
References
- https://economictimes.indiatimes.com/tech/artificial-intelligence/india-facing-shortage-of-agentic-ai-professionals-amid-surge-in-demand/articleshow/120651512.cms?from=mdr
- https://economictimes.indiatimes.com/tech/artificial-intelligence/india-a-global-leader-in-agentic-ai-adoption-deloitte-report/articleshow/119906474.cms?from=mdr
- https://inc42.com/features/from-zomato-to-infosys-why-indias-biggest-companies-are-betting-on-agentic-ai/
- https://www.hindustantimes.com/india-news/agentic-ai-next-big-leap-in-workplace-automation-101742548406693.html
- https://www.deloitte.com/in/en/about/press-room/india-rides-the-agentic-ai-wave.html
- https://www.businesstoday.in/tech-today/news/story/ais-next-chapter-starts-in-india-microsoft-champions-agentic-ai-at-pinnacle-2025-474286-2025-05-01
- https://www.hindustantimes.com/opinion/calm-before-ai-storm-a-moment-to-prepare-101746110985736.html
- https://www.financialexpress.com/life/technology/why-agentic-ai-is-the-next-big-thing/3828357/

Introduction
Google Play has announced a new policy designed to build trust and transparency on the platform through a new framework for developer verification and app details. Under the policy, organisations creating a new Play Console developer account must provide a D-U-N-S number, a nine-digit unique identifier used to verify the business. Developers will also provide detailed information on each app's listing page, so users know who is behind the apps they install.
Verifying Developer Identity with D-U-N-S Numbers
To boost security, the new Google Play policy requires organisations to provide a D-U-N-S number when creating a new Play Console developer account. The D-U-N-S number, assigned by Dun & Bradstreet, will be used to verify the business. Once a developer creates a new Play Console developer account with a D-U-N-S number, Google Play will verify the developer's details, after which the developer can start publishing apps. Through this step, Google Play aims to validate business information more authentically.
If your organisation does not have a D-U-N-S number, you can look one up or request one for free at https://www.dnb.com/duns-number/lookup.html. The request process can take up to 30 days. Developers are also required to keep this information up to date.
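Since a D-U-N-S number is a fixed nine-digit identifier, its format can be pre-validated before submission. The sketch below is a hedged illustration of a client-side format check only (the helper name and the hyphenated variant are my own assumptions); whether a number is actually registered can only be confirmed by Dun & Bradstreet.

```python
import re

# Matches nine digits, either plain ("123456789") or in the
# commonly written XX-XXX-XXXX form ("12-345-6789").
# NOTE: this checks format only, not registration with Dun & Bradstreet.
DUNS_PATTERN = re.compile(r"^\d{2}-?\d{3}-?\d{4}$")

def is_valid_duns_format(value: str) -> bool:
    """Return True if the string looks like a D-U-N-S number."""
    return bool(DUNS_PATTERN.match(value.strip()))
```

A form could call this before submitting, rejecting obviously malformed entries early while leaving real verification to the registry.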
Building User Trust with Enhanced App Details
In addition to verifying developer identities more rigorously, Google Play also requires developers to provide sufficient app details to users. A new “App support” section on the app’s store listing page will display the app’s support email address and may also include a website and phone number for support.
A new “About the developer” section will also be introduced to provide users with verified identity information, including the developer’s name, address, and contact details, making users better informed about who develops the apps they use.
Key Highlights of the Google Play Policy
- Google Play introduced the policy to keep the platform safe by verifying developers’ identities, which will also help reduce the spread of malware apps and help users make confident, informed decisions about the apps they download. The policy expands Google Play’s developer verification requirements to strengthen the platform and build user trust: when you create a new Play Console developer account and choose organisation as your account type, you will now need to provide a D-U-N-S number.
- Users will get detailed information about the developers’ identities and contact information, building more transparency and encouraging responsible app development practices.
- This policy will enable the users to make informed choices about the apps they download.
- The new “App support” section will enable better communication between users and developers by displaying support email addresses, websites, and support phone numbers, streamlining the support process and improving user satisfaction.
Timeline and Implementation
The D-U-N-S number requirement will start rolling out on 31 August 2023 for all new Play Console developer accounts. The “About the developer” section will be visible to users as soon as a new app is published. In October 2023, existing developers will also be required to update and verify their accounts to comply with the new verification policy.
Conclusion
Google Play’s new policy aims to foster a more transparent app ecosystem by giving users more information about developers. Google Play seeks to establish a platform where users can confidently discover and download apps, making it a more reliable and trustworthy place to do so.

Introduction
The Central Board of Secondary Education (CBSE) has issued a warning to students about fake social media accounts that spread false information in the CBSE's name. The board has warned students not to trust information from these accounts and has released a list of 30 fake accounts. It has expressed concern that these handles are misleading students and parents by spreading fake information using the name and logo of the CBSE, and has clarified that it is not responsible for any information spread by these fake accounts.
The Central Board of Secondary Education (CBSE), a venerable institution in the realm of Indian education, has found itself ensnared in the web of cyber duplicity. Impersonation attacks, a sinister facet of cybercrime, have burgeoned, prompting the Board to adopt a vigilant stance against the proliferation of counterfeit social media handles that masquerade under its esteemed name and emblem.
The CBSE has revealed a list of approximately 30 spurious handles that have been sowing seeds of disinformation across the social media landscape. These digital doppelgängers, cloaked in the Board's identity, have been identified and exposed. The Board's official beacon in this murky sea of falsehoods is the verified handle '@cbseindia29', a lighthouse guiding the public to the shores of authentic information.
This unfolding narrative signifies the Board's unwavering commitment to tackle the scourge of misinformation and to fortify the bulwarks safeguarding the sanctity of its official communications. By spotlighting the rampant growth of fake social media personas, the CBSE endeavors to shield the public from the detrimental effects of misleading information and to preserve the trust vested in its official channels.
CBSE Impersonator Accounts
The list of identified malefactors, parading under the CBSE banner, serves as a stark admonition to the public to exercise discernment while navigating the treacherous waters of social media platforms. The CBSE has initiated appropriate legal manoeuvres against these unauthorised entities to stymie their dissemination of fallacious narratives.
The Board has previously unfurled comprehensive details concerning the impending board examinations for both Class 10 and Class 12 in the year 2024. These academic assessments are slated to commence from February 15 to April 2, 2024, with a uniform start time of 10:30 AM (IST) across all designated dates.
The CBSE has made it unequivocally clear that there are nefarious entities lurking in the shadows of social media, masquerading in the guise of the CBSE. It has implored students and the general public not to be ensnared by the siren songs emanating from these fraudulent accounts and has also unfurled a list of these imposters. The Board's warning is a beacon of caution, illuminating the path for students as they navigate the digital expanse with the impending commencement of the CBSE Class X and XII exams.
Sounding The Alarm
The Central Board of Secondary Education (CBSE) has sounded the alarm, issuing an advisory to schools, students, and their guardians about the existence of fake social media platform handles that brandish the board’s logo and mislead the academic community. The board has identified about 30 such accounts on the microblogging site 'X' (formerly known as Twitter) that misuse the CBSE logo and acronym, sowing confusion and disarray.
The board is in the process of taking appropriate action against these deceptive entities. CBSE has also stated that it bears no responsibility for any information disseminated by any other source that unlawfully appropriates its name and logo on social media platforms.
Sources reveal that these impostors post false information on various updates, including admissions and exam schedules. After receiving complaints about such accounts on 'X', the CBSE issued the advisory and initiated action against those operating them.
The Brute Nature of Impersonation
In the contemporary digital epoch, cybersecurity has ascended to a position of critical importance. It is the bulwark that ensures the sanctity of computer networks is maintained and that computer systems are not marked as prey by cyber predators. Cyberattacks are insidious stratagems executed with the intent of expropriating, manipulating, or annihilating authenticated user or organizational data. It is imperative that cyberattacks be mitigated at their roots so that users and organizations utilizing internet services can navigate the digital domain with a sense of safety and security. Knowledge about cyberattacks thus plays a pivotal role in educating cyber users about the diverse types of cyber threats and the preventive measures to counteract them.
Impersonation Attacks are a vicious form of cyberattack, characterised by the malicious intent to extract confidential information. These attacks revolve around a process where cyber attackers eschew the use of malware or bots to perpetrate their crimes, instead wielding the potent tactic of social engineering. The attacker meticulously researches and harvests information about the legitimate user through platforms such as social media and then exploits this information to impersonate or masquerade as the original, legitimate user.
The threats posed by impersonation attacks are particularly insidious because they demand immediate action, pressuring the victim to act before distinguishing the authenticated user from the impersonator. Impersonation is a perilous form of cyber assault because the original user being impersonated holds rights to private information. These attacks are often executed by exploiting a resemblance to the original user's identity, such as email IDs: addresses with minute differences from the legitimate user's are employed, setting this apart from the classic phishing mechanism. The addresses are so similar that, without close attention, the differences are easily overlooked; moreover, they appear correct because they generally contain no spelling errors.
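The lookalike-address trick described above can also be caught programmatically. As an illustrative sketch using Python's standard difflib (the function name, threshold, and example addresses are my own assumptions, not a specific mail product's API), a filter could flag addresses that are nearly, but not exactly, identical to a trusted one:

```python
from difflib import SequenceMatcher

def looks_like_impersonation(address: str, trusted: str,
                             threshold: float = 0.9) -> bool:
    """Flag addresses suspiciously similar to, but not exactly matching,
    a trusted address -- e.g. 'ceo@examp1e.com' vs 'ceo@example.com'."""
    a, b = address.lower().strip(), trusted.lower().strip()
    if a == b:
        return False  # exact match: the legitimate sender
    # High similarity without exact equality suggests a lookalike
    return SequenceMatcher(None, a, b).ratio() >= threshold
```

Real mail-security products combine such similarity checks with authentication signals like SPF, DKIM, and DMARC rather than relying on string distance alone.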
Strategies to Prevent
To prevent Impersonation Attacks, the following strategies can be employed:
- Proper security mechanisms help identify malicious emails and thereby filter spamming email addresses on a regular basis.
- Double-checking sensitive information is crucial, especially when important data or funds need to be transferred. It is vital to ensure that the data is transferred to a legitimate user by cross-verifying the email address.
- Ensuring organizational-level security is paramount. Organizations should have specific domain names assigned to them, which can help employees and users distinguish their identity from that of cyber attackers.
- Protection of User Identity is essential. Employees must not publicly share their private identities, which can be exploited by attackers to impersonate their presence within the organization.
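The organisational-domain safeguard in the list above can be sketched as a minimal sender filter; `example.org` is a placeholder for a real organisation's domain, and the helper name is hypothetical:

```python
def is_internal_sender(address: str, org_domain: str = "example.org") -> bool:
    """Return True only if the address's domain exactly matches the
    organisation's own domain (placeholder: example.org)."""
    # rsplit on the last '@' so a crafted local part cannot confuse the check
    return address.lower().strip().rsplit("@", 1)[-1] == org_domain
```

An exact-match comparison like this rejects both lookalike domains ('examp1e.org') and suffix tricks ('example.org.evil.com'); in practice it would sit alongside, not replace, proper email authentication.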
Conclusion
The CBSE's struggle against the masquerade of misinformation is a reminder of the vigilance required to safeguard the legitimacy of our digital interactions. As we navigate the complex and uncharted terrain of the internet, let us arm ourselves with the knowledge and discernment necessary to unmask these digital charlatans and uphold the sanctity of truth.
References
- https://timesofindia.indiatimes.com/city/ahmedabad/cbse-warns-against-misuse-of-its-name-by-fake-social-media-handles/articleshow/107644422.cms
- https://www.timesnownews.com/education/cbse-releases-list-of-fake-social-media-handles-asks-not-to-follow-article-107632266
- https://www.etvbharat.com/en/!bharat/cbse-public-advisory-enn24021205856