#FactCheck - Debunking Manipulated Photos of Smiling Secret Service Agents During Trump Assassination Attempt
Executive Summary:
Viral pictures featuring US Secret Service agents smiling while protecting former President Donald Trump during the attempt on his life have been found to be manipulated. The images circulating on social media were produced using AI manipulation tools; the original photograph, available on several news websites, shows no smiling agents. The incident took place on July 13, 2024, when Thomas Matthew Crooks opened fire at a Trump rally in Butler, Pennsylvania, leaving one attendee dead and two others critically injured before the Secret Service neutralized the shooter. The circulating photos, in which the smiles were faked, stirred up suspicion online. The CyberPeace Research Team analysed and debunked the face-manipulated image.

Claims:
Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.



Fact Check:
Upon receiving the posts, we searched for credible sources supporting the claim. We found several articles and images of the incident, but the images in them were different.

This image was published by CNN; in it, the US Secret Service agents protecting Donald Trump are visible, but they are not smiling. We then checked the viral image for AI manipulation using the AI image detection tool True Media.


We then checked with another AI image detection tool, Content at Scale AI Image Detection, which also found the image to be AI-manipulated.

Comparison of both photos:

Hence, given the lack of credible sources supporting the claim and the detection of AI manipulation, we conclude that the image is fake and misleading.
Conclusion:
The viral photos claiming to show Secret Service agents smiling while protecting former President Donald Trump during the assassination attempt have been proven to be digitally manipulated. The original image, published by CNN, shows no agents smiling. The spread of these altered photos resulted in misinformation. The CyberPeace Research Team's investigation and comparison of the original and manipulated images confirm that the viral claims are false.
- Claim: Viral photos allegedly show United States Secret Service agents smiling while rushing to protect former President Donald Trump during an attempted assassination in Pittsburgh, Pennsylvania.
- Claimed on: X, Threads
- Fact Check: Fake & Misleading
Related Blogs

In the vast, interconnected cosmos of the internet, where knowledge and connectivity are celebrated as the twin suns of enlightenment, there lurk shadows of a more sinister nature. Here, in these darker corners, the innocence of childhood is not only exploited but also scarred, indelibly and forever. The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have surged to alarming levels globally, casting a long, ominous shadow over the digital landscape.
In response to this pressing issue, the National Human Rights Commission (NHRC) has unfurled a comprehensive four-part advisory, a beacon of hope aimed at combating CSAM and safeguarding the rights of children in this digital age. This advisory, dated 27 October 2023, is not merely a reaction to the rising tide of CSAM but a testament to the imperative need for constant vigilance in the realm of cyber peace.
The statistics paint a sobering picture. In 2021, more than 1,500 instances of publishing, storing, and transmitting CSAM were reported, shedding a harsh light on the scale of the problem. Even more alarming is the upward trend in cases reported in subsequent years. By 2023, a staggering 450,207 cases of CSAM had already been reported, marking a significant increase from the 204,056 and 163,633 cases reported in 2022 and 2021, respectively.
The Key Aspects of Advisory
The NHRC's advisory commences with a fundamental recommendation - a redefinition of terminology. It suggests replacing the term 'Child Pornography' with 'Child Sexual Abuse Material' (CSAM). This shift in language is not merely semantic; it underscores the gravity of the issue, emphasizing that this is not about pornography but child abuse.
Moreover, the advisory calls for the term 'sexually explicit' to be defined under Section 67B of the IT Act, 2000. This step is crucial for ensuring the prompt identification and removal of online CSAM; with a clear definition, law enforcement can act swiftly to remove such content from the internet.
The digital world knows no borders, and CSAM can easily cross jurisdictional lines. NHRC recognizes this challenge and proposes that laws be harmonized across jurisdictions through bilateral agreements. Moreover, it recommends pushing for the adoption of a UN draft Convention on 'Countering the Use of Information and Communications Technologies for Criminal Purposes' at the General Assembly.
One of the critical aspects of the advisory is the strengthening of law enforcement. NHRC advocates for the creation of Specialized State Police Units in every state and union territory to handle CSAM-related cases. The central government is expected to provide support, including grants, to set up and equip these units.
The NHRC further recommends establishing a Specialized Central Police Unit under the government of India's jurisdiction. This unit will focus on identifying and apprehending CSAM offenders and maintaining a repository of such content. Its role is not limited to law enforcement; it is expected to cooperate with investigative agencies, analyze patterns, and initiate the process for content takedown. This coordinated approach is designed to combat the problem effectively, both on the dark web and open web.
The role of internet intermediaries and social media platforms in controlling CSAM is undeniable. The NHRC advisory emphasizes that intermediaries must deploy technology, such as content moderation algorithms, to proactively detect and remove CSAM from their platforms. This places the onus on the platforms to be proactive in policing their content and ensuring the safety of their users.
New Developments
Platforms using end-to-end encryption services may be required to create additional protocols for monitoring the circulation of CSAM. Failure to do so may invite the withdrawal of the 'safe harbor' clause under Section 79 of the IT Act, 2000. This measure ensures that platforms using encryption technology are not inadvertently providing safe havens for those engaged in illegal activities.
NHRC's advisory extends beyond legal and law enforcement measures; it emphasizes the importance of awareness and sensitization at various levels. Schools, colleges, and institutions are called upon to educate students, parents, and teachers about the modus operandi of online child sexual abusers, the vulnerabilities of children on the internet, and the early signs of online child abuse.
To further enhance awareness, a cyber curriculum is proposed to be integrated into the education system. This curriculum will not only boost digital literacy but also educate students about relevant child care legislation, policies, and the legal consequences of violating them.
NHRC recognizes that survivors of CSAM need more than legal measures and prevention strategies. The advisory recommends that survivors receive support services and opportunities for rehabilitation through various means, with partnerships with civil society and other stakeholders playing a vital role in this aspect. Moreover, psycho-social care centres are proposed to be established in every district to facilitate need-based support services and the organization of stigma eradication programs.
NHRC's advisory is a resounding call to action, acknowledging the critical importance of protecting children from the perils of CSAM. By addressing legal gaps, strengthening law enforcement, regulating online platforms, and promoting awareness and support, the NHRC aims to create a safer digital environment for children.
Conclusion
In a world where the internet plays an increasingly central role in our lives, these recommendations are not just proactive but imperative. They underscore the collective responsibility of governments, law enforcement agencies, intermediaries, and society as a whole in safeguarding the rights and well-being of children in the digital age.
NHRC's advisory is a pivotal guide to a more secure and child-friendly digital world. By addressing the rising tide of CSAM and emphasizing the need for constant vigilance, NHRC reaffirms the critical role of organizations, governments, and individuals in ensuring cyber peace and child protection in the digital age. The active contribution of premier cyber-resilience organizations such as the CyberPeace Foundation amplifies this collective action towards a secure digital space, highlighting the pivotal role played by think tanks in ensuring cyber peace and resilience.
References:
- https://www.hindustantimes.com/india-news/nhrc-issues-advisory-regarding-child-sexual-abuse-material-on-internet-101698473197792.html
- https://ssrana.in/articles/nhrcs-advisory-proliferation-of-child-sexual-abuse-material-csam/
- https://theprint.in/india/specialised-central-police-unit-use-of-technology-to-proactively-detect-csam-nhrc-advisory/1822223/

Artificial intelligence is revolutionizing industries from healthcare to finance, influencing decisions that touch the lives of millions every day. However, this power carries a hidden danger: AI systems can produce unfair outcomes, reinforce social inequalities, and erode trust in technology. One of the main causes is training data bias, which appears when the examples an AI model is trained on are skewed or unrepresentative. Dealing with it successfully requires a combination of statistical methods, fairness-aware algorithmic design, and robust governance across the AI lifecycle. This article discusses the origins of bias, ways to reduce it, and the distinct role of fairness-conscious algorithms.
Why Bias in Training Data Matters
Bias in AI occurs when models mirror and reproduce patterns of inequality present in the training data. When a dataset under-represents a demographic group or encodes historical prejudice, the model learns to make decisions that harm that group. The practical implications are real: biased AI can cause discrimination in hiring, lending, criminal-risk assessment, and many other spheres of social life, compromising justice and equity. These problems are not only technical; they also demand ethical principles and a system of governance (E&ICTA).
Bias is not uniform. It may originate in the data itself, in the algorithm's design, or even in a lack of diversity among developers. Data bias occurs when the data does not represent the real world. Algorithmic bias may arise when design decisions inadvertently advantage one group over another. Human bias can affect both data collection and the interpretation of the model's outputs. (MDPI)
Statistical Principles for Reducing Training Data Bias
Statistical principles sit at the core of bias mitigation, reshaping how data and models interact. These approaches focus on data preparation, adjustments to the training process, and corrections to model outputs, so that fairness becomes a quantifiable goal.
Balancing Data Through Re-Sampling and Re-Weighting
One approach is to ensure a fair representation of all relevant groups in the dataset. This can be achieved by oversampling under-represented groups and undersampling over-represented ones. Oversampling duplicates (or synthesizes) minority examples, whereas re-weighting assigns larger weights to under-represented data points during training. Both methods reduce the tendency of models to overfit to the majority's patterns and improve coverage of vulnerable groups. (GeeksforGeeks) A minimal sketch of both ideas follows.
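As a rough illustration, the sketch below (using only pandas and NumPy) shows one possible way to oversample a minority group and to derive inverse-frequency sample weights. The column names `group`, `feature`, and `label`, and the data itself, are illustrative and not taken from any particular dataset.

```python
import numpy as np
import pandas as pd

# Toy dataset: roughly 90% of rows belong to group "A", 10% to group "B".
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "group": np.where(rng.random(1000) < 0.9, "A", "B"),
    "label": rng.integers(0, 2, size=1000),
})

# --- Re-sampling: oversample each group up to the size of the largest group ---
counts = df["group"].value_counts()
target = counts.max()
balanced = pd.concat(
    [g.sample(n=target, replace=True, random_state=0) for _, g in df.groupby("group")],
    ignore_index=True,
)
print(balanced["group"].value_counts())  # both groups now have `target` rows

# --- Re-weighting: keep the data as-is, but give each row a weight that is
# inversely proportional to its group's frequency ---
weights = df["group"].map(len(df) / (len(counts) * counts))
# `weights` can then be passed as sample_weight to most scikit-learn estimators,
# e.g. LogisticRegression().fit(X, y, sample_weight=weights)
```

Whether to re-sample or re-weight is largely a practical choice: re-sampling changes the dataset itself, while re-weighting leaves the data untouched and relies on the training procedure honouring per-example weights.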
Feature Engineering and Data Transformation
Another statistical technique is to transform the data so that sensitive characteristics have less impact on the results. For example, fair representation learning adjusts the data representation to discourage bias during model training, and disparate-impact removal adjusts feature values so that the influence of sensitive attributes on learning is reduced. (GeeksforGeeks)
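The snippet below is a simplified sketch in the spirit of such feature transformations, not the exact algorithm used by any particular library: it removes the component of each feature that is linearly predictable from a binary sensitive attribute, so the transformed features are (linearly) uncorrelated with group membership. All names and data are invented for illustration.

```python
import numpy as np

def residualize(features: np.ndarray, sensitive: np.ndarray) -> np.ndarray:
    """Remove the component of each feature that is linearly predictable
    from the sensitive attribute, so the transformed features carry less
    (linear) information about group membership."""
    s = sensitive.reshape(-1, 1).astype(float)
    design = np.hstack([np.ones_like(s), s])          # intercept + sensitive attribute
    # Least-squares fit of every feature column on the sensitive attribute ...
    coef, *_ = np.linalg.lstsq(design, features, rcond=None)
    # ... then keep only the residual (the part it does not explain).
    return features - design @ coef

rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, size=500)                  # e.g. a binary group flag
X = rng.normal(size=(500, 3)) + sensitive[:, None] * 1.5  # features "leak" the group
X_fair = residualize(X, sensitive)

# After the transform, each column is (linearly) uncorrelated with the group flag.
print(np.corrcoef(X_fair[:, 0], sensitive)[0, 1])  # close to 0
```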
Measuring Fairness With Metrics
Statistical fairness metrics quantify how a model performs across groups; common examples include demographic parity (equal positive-prediction rates across groups) and equalized odds (equal true-positive and false-positive rates across groups).
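As a minimal, hedged example, the functions below compute two such quantities from arrays of labels, predictions, and group membership: the demographic parity difference (gap in positive-prediction rates) and the true-positive-rate gap (one component of equalized odds). The toy arrays are purely illustrative.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def true_positive_rate_gap(y_true, y_pred, group):
    """Absolute difference in recall (TPR) between groups; part of equalized odds."""
    def tpr(mask):
        positives = (y_true == 1) & mask
        return y_pred[positives].mean() if positives.any() else np.nan
    return abs(tpr(group == 0) - tpr(group == 1))

# Toy example with made-up labels, predictions, and group membership
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.0
print(true_positive_rate_gap(y_true, y_pred, group)) # ~0.33
```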
Fairness-Aware Algorithms Explained
Fair algorithms do not simply detect bias. They incorporate fairness goals into model construction and operate in three stages: pre-processing, in-processing, and post-processing.
Pre-Processing Techniques
Fairness-aware pre-processing deals with bias before the model consumes the data. It involves the following approaches:
- Rebalancing training data through re-sampling and re-weighting to address sample imbalances.
- Data augmentation to generate examples of underrepresented groups.
- Feature transformation that removes or downplays the impact of sensitive attributes before training begins. (IJMRSET)
These methods help ensure that the model is trained on more balanced data and reduce the chance of bias being carried over from historical data; a minimal augmentation sketch is given below.
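The following sketch illustrates one simple, SMOTE-style form of data augmentation: synthesizing new minority-group rows by interpolating between randomly chosen pairs of existing rows. Real augmentation pipelines (and SMOTE itself, which interpolates between nearest neighbours) are more careful than this; the data and function name here are invented for the example.

```python
import numpy as np

def interpolate_minority(X_minority: np.ndarray, n_new: int, seed: int = 0) -> np.ndarray:
    """Create synthetic minority-group rows by interpolating between
    randomly chosen pairs of existing minority rows (SMOTE-style)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X_minority), size=n_new)
    j = rng.integers(0, len(X_minority), size=n_new)
    alpha = rng.random((n_new, 1))  # interpolation factor for each new row
    return X_minority[i] + alpha * (X_minority[j] - X_minority[i])

rng = np.random.default_rng(2)
X_minority = rng.normal(size=(20, 4))                 # only 20 minority examples
X_synthetic = interpolate_minority(X_minority, n_new=80)
X_augmented = np.vstack([X_minority, X_synthetic])    # 100 minority rows for training
print(X_augmented.shape)
```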
In-Processing Techniques
The in-processing techniques alter the learning algorithm. These include:
- Fairness constraints that penalize the model for making biased predictions during training (a minimal sketch of this idea follows after this list).
- Adversarial debiasing, where a second model is used to ensure that sensitive attributes cannot be predicted from the learned representations.
- Fair representation learning that modifies internal model representations in favor of representations carrying less information about sensitive attributes.
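One way a fairness constraint can be realized is as a penalty added to the training loss. The sketch below, an illustrative simplification rather than a reference implementation, trains a logistic regression by gradient descent with an extra demographic-parity-style penalty on the squared gap between the groups' average predicted scores. All data and hyperparameters are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a demographic-parity penalty:
    loss = cross-entropy + lam * (mean score of group 0 - mean score of group 1)^2."""
    n, d = X.shape
    w = np.zeros(d)
    g0, g1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_ce = X.T @ (p - y) / n                      # cross-entropy gradient
        gap = p[g0].mean() - p[g1].mean()                # group score gap
        dp = p * (1 - p)                                 # derivative of sigmoid
        dgap_dw = (X[g0] * dp[g0][:, None]).mean(axis=0) - (X[g1] * dp[g1][:, None]).mean(axis=0)
        w -= lr * (grad_ce + 2 * lam * gap * dgap_dw)    # penalized gradient step
    return w

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=400)
X = np.hstack([rng.normal(size=(400, 2)), np.ones((400, 1))])   # features + bias column
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=400) > 0.5).astype(float)
w = train_fair_logreg(X, y, group, lam=5.0)
# With a large lam, the average predicted scores of the two groups move closer together.
print(sigmoid(X @ w)[group == 0].mean(), sigmoid(X @ w)[group == 1].mean())
```

Raising `lam` trades predictive accuracy for a smaller score gap, which is exactly the fairness-performance tension discussed under Challenges below.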
Post-Processing Techniques
Fairness may be enhanced after training by changing the model outputs. These strategies comprise:
- Threshold adjustments for different groups to meet fairness conditions such as equalized odds (see the sketch after this list).
- Calibration techniques so that the predicted probabilities reflect actual outcome rates within each group. (GeeksforGeeks)
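A minimal sketch of threshold adjustment, assuming binary groups and labels, is shown below: it searches, per group, for the score threshold whose true-positive rate is closest to a common target (an equal-opportunity-style relaxation of equalized odds). The scores and labels are simulated purely for illustration.

```python
import numpy as np

def tpr_at(scores, labels, threshold):
    """True-positive rate when classifying scores >= threshold as positive."""
    preds = scores >= threshold
    positives = labels == 1
    return preds[positives].mean() if positives.any() else 0.0

def equalize_tpr_thresholds(scores, labels, group, target_tpr=0.8):
    """Pick a separate threshold per group so each group's TPR is as close
    as possible to `target_tpr` (a simple equal-opportunity-style adjustment)."""
    thresholds = {}
    for g in np.unique(group):
        mask = group == g
        candidates = np.unique(scores[mask])
        # choose the candidate threshold whose TPR is closest to the target
        best = min(candidates,
                   key=lambda t: abs(tpr_at(scores[mask], labels[mask], t) - target_tpr))
        thresholds[g] = best
    return thresholds

rng = np.random.default_rng(4)
group = rng.integers(0, 2, size=1000)
labels = rng.integers(0, 2, size=1000)
# Simulated scores are systematically lower for group 1, even for true positives.
scores = rng.random(1000) + 0.3 * labels - 0.15 * group
print(equalize_tpr_thresholds(scores, labels, group))  # lower threshold for group 1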
Challenges
Mitigating bias is complex. Statistical bias minimization can come at the cost of model accuracy, creating a tension between predictive performance and fairness. Defining fairness itself is difficult: different applications call for different criteria, and those criteria can conflict with one another. (MDPI)
Obtaining varied and representative data is also difficult because of privacy concerns, incomplete records, and limited resources. Continuous auditing and reporting are needed to keep mitigation processes up to date as models are continually retrained. (E&ICTA)
Why Fairness-Aware Development Matters
The consequences of AI systems treating some groups unfairly are far-reaching. Discriminatory recruitment software can entrench inequality in the workplace. Biased credit scoring can deprive deserving people of opportunities. Biased medical predictions can result in the flawed allocation of medical resources. In each case, bias undermines credibility and clouds the broader promise of AI. (E&ICTA)
Fairness-aware algorithms and statistical mitigation strategies provide a way to create AI that is not only powerful but also fair and trustworthy. They acknowledge that AI systems are social tools whose effects extend across society. Responsible development requires sustained fairness measurement, model adjustment, and continued human oversight.
Conclusion
AI bias is not a technical malfunction. It mirrors real-world disparities present in data and can be amplified by models. Reducing training data bias requires statistical rigor, careful algorithm design, and a willingness to address the trade-offs between fairness and performance. Fairness-conscious algorithms, whether applied in pre-processing, in-processing, or post-processing, help deliver more equitable results. As AI takes part in ever more crucial decisions, fairness must be considered from the outset so that systems serve the population responsibly and fairly.
References
- Understanding Bias in Artificial Intelligence: Challenges, Impacts, and Mitigation Strategies: E&ICTA, IITK
- Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies: JRPS Shodh Sagar
- Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies: MDPI
- Ensuring Fairness in Machine Learning Algorithms: GeeksforGeeks
- Bias and Fairness in Machine Learning Models: A Critical Examination of Ethical Implications: IJMRSET
- Bias in AI Models: Origins, Impact, and Mitigation Strategies: Preprints
- Bias in Artificial Intelligence and Mitigation Strategies: TCS
- Survey on Machine Learning Biases and Mitigation Techniques: MDPI

Digital vulnerabilities like cyber-attacks and data breaches proliferate rapidly in today's hyper-connected world. They can compromise sensitive data such as personal information, financial data, and intellectual property, and can threaten businesses of all sizes across all sectors. Hence, it has become important to inform all stakeholders about any breach or attack so that they can be well prepared for its consequences.
Failure to report can result in heavy fines in many parts of the world. Data breaches caused by malicious acts are crimes and need proper investigation, and organisations may face significant penalties, financial setbacks, and legal complications for failing to report such incidents. Understanding why transparency is vital, and the regulatory framework that governs data breaches, is the first step.
The Current Indian Regulatory Framework on Data Breach Disclosure
A data breach, essentially, is the unauthorised processing or accidental disclosure of personal data, which may occur through its acquisition, sharing, use, alteration, destruction, or loss of access. Such incidents can compromise the affected data's confidentiality, integrity, or availability. In India, the Information Technology Act of 2000 and the Digital Personal Data Protection Act of 2023 are the primary legislation tackling cybercrimes such as data breaches.
- Under the DPDP Act, neither materiality thresholds nor express timelines have been prescribed for the reporting requirement. Data Fiduciaries are required to report incidents of personal data breach, regardless of their sensitivity or impact on the Data Principal.
- The IT (Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013, and the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, along with the Cyber Security Directions issued in 2022 under Section 70B(6) of the IT Act, 2000 (relating to information security practices, procedures, prevention, response, and reporting of cyber incidents for a Safe & Trusted Internet), impose mandatory notification requirements on service providers, intermediaries, data centres, and corporate entities upon the occurrence of certain cybersecurity incidents.
- These laws and regulations obligate companies to report breaches and cyber incidents to regulators such as CERT-In and the Data Protection Board.
The Consequences of Non-Disclosure
Non-disclosure of a data breach has manifold consequences:
- Legal and financial penalties are the immediate consequence of a data breach in India. The DPDP Act prescribes penalties of up to Rs 250 crore, along with civil suits and regulatory scrutiny. Non-compliance can also attract action from CERT-In, leading to further reputational damage.
- In the long term, failure to disclose data breaches can erode customer trust as they are less likely to engage with a brand that is deemed unreliable. Investor confidence may potentially waver due to concerns about governance and security, leading to stock price drops or reduced funding opportunities. Brand reputation can be significantly tarnished, and companies may struggle with retaining and attracting customers and employees. This can affect long-term profitability and growth.
- Companies such as BigBasket and Jio in 2020, and Haldiram in 2022, have suffered data breaches in recent years. Poor transparency and delayed disclosures led to significant reputational damage, legal scrutiny, and regulatory action for these companies.
Measures for Improvement: Building Corporate Reputation via Transparency
Transparency is critical when disclosing data breaches. A company that prioritises its stakeholders' data privacy earns greater trust and loyalty, and transparency mitigates backlash by demonstrating the company's willingness to cooperate with authorities. A far-sighted approach instils confidence in all stakeholders by showcasing the company's resilience and commitment to good governance. These measures can be further improved by:
- Offering actionable steps for companies to establish robust data breach policies, including regular audits, prompt notifications, and clear communication strategies.
- Highlighting the importance of cooperation with regulatory bodies and how to ensure compliance with the DPDP Act and other relevant laws.
- Sharing best public communications practices post-breach to manage reputational and legal risks.
Conclusion
Maintaining transparency when a data breach happens is more than a legal obligation; it is also a sound strategy for protecting corporate reputation. Companies can mitigate the potential legal, financial, and reputational risks by informing stakeholders and cooperating with regulatory bodies proactively. In an era where digital vulnerabilities are ever-present, clear communication and compliance with data protection laws such as the DPDP Act build trust, enhance corporate governance, and secure long-term business success. Proactive measures, including audits, breach policies, and effective public communication, are critical in reinforcing resilience and fostering stakeholder confidence in the face of cyber threats.
References
- https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
- https://www.cert-in.org.in/PDF/CERT-In_Directions_70B_28.04.2022.pdf
- https://chawdamrunal.medium.com/the-dark-side-of-covering-up-data-breaches-why-transparency-is-crucial-fe9ed10aac27
- https://www.dlapiperdataprotection.com/index.html?t=breach-notification&c=IN