Advisory for APS School Students
Pretext
The Army Welfare Education Society (AWES) has informed parents and students that a scam is targeting Army school students. The scammer approaches students using faked female and male voices and asks for their personal information and photos, claiming to be collecting details for an Independence Day event organised by the society. AWES has cautioned parents to beware of such calls from scammers.
Students of Army schools in Jammu & Kashmir and Noida have been receiving calls from the scammer asking them to share sensitive information. Students across the country are receiving calls and WhatsApp messages from two numbers ending in 1715 and 2167. The scammers pose as teachers and ask for students' names on the pretext of adding them to WhatsApp groups. They then send form links to the groups and ask students to fill out the forms, seeking more sensitive information.
Do’s
- Do verify the caller's identity.
- Do block the caller if the call seems suspicious.
- Do be careful while sharing personal information.
- Do inform the school authorities when you receive such calls or messages from people posing as teachers.
- Do check the legitimacy of any agency or organisation before sharing details.
- Do record calls that ask for personal information.
- Do inform your parents about scam calls.
- Do cross-check any caller who asks for crucial information.
- Do make others aware of the scam.
Don’ts
- Don’t answer calls from unknown or anonymous numbers.
- Don’t share personal information with anyone.
- Don’t share OTPs with anyone.
- Don’t open suspicious links.
- Don’t fill out any form asking for personal information.
- Don’t confirm your identity until you know who the caller is.
- Don’t visit websites prompted by suspicious calls.
- Don’t reply to messages asking for financial information.
- Don’t share bank details or passwords.
- Don’t make payments in response to prompts from fake calls.
Executive Summary:
A video has gone viral on social media claiming to show the recent earthquake in Taiwan. However, fact-checking reveals it to be an old video from September 2022, when Taiwan was hit by another earthquake of magnitude 7.2. Reverse image search and comparison with older footage establish that the viral video is from the 2022 earthquake, not the recent 2024 event. Several news outlets covered the 2022 incident, providing further confirmation of the video's origin.
Claims:
News about the recent earthquakes in Taiwan and Japan is circulating on social media. A post on “X” states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.
Similar Posts:
Fact Check:
We started our investigation by watching the video thoroughly and dividing it into frames. We then performed a reverse image search, which led us to an X (formerly Twitter) post in which a user had shared the same viral video on September 18, 2022. Notably, the post carries the caption:
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”
The same viral video was carried by several news media outlets in September 2022. It was also aired on September 18, 2022 by the NDTV news channel, as shown below.
Conclusion:
To conclude, the viral video that claims to depict the 2024 Taiwan earthquake actually dates from September 2022. A rigorous inspection of the old and new evidence makes clear that the video does not show the recent earthquake as claimed. Hence, the viral video is misleading. It is important to validate information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake and misleading; the video actually shows an incident from 2022.
New Delhi [India], November 12 (ANI/NewsVoir): Cyber is the new weapon today! Cyber violence is violence in cyberspace that violates the cyber rights of individuals, especially children and women. Online violence and harassment have been overlooked, with greater emphasis laid on offline or physical violence. Cyber violence very often leaves permanent psychological scars on victims and their families. Threats ranging from morphing and stalking to the solicitation of children for sexual purposes and online grooming have grave consequences for victims, disturbing their mental well-being. Maintaining mental well-being in cyberspace is a challenge we wish to promote and advocate for, in order to build responsible netizens.
Together, we stand against the violation of cyber rights and strongly believe it is critical for everyone to feel safe online. Netizens’ safety rights must be protected from all kinds of abuse and violence. With the mission of ‘Making India Cyber Safe for Children and Women’, Responsible Netism, a social purpose organisation, in association with CyberPeace Foundation, an award-winning cybersecurity think tank that has been working towards bringing CyberPeace to cyberspace for more than two decades, will host its 6th Annual National Conference on Cyber Psychology, themed “India Fights Cyber Violence”, on Saturday, January 22, 2022. To advocate on the theme, the #IndiaFightsCyberViolence campaign was launched on November 11, 2021 by Vinay Sahasrabuddhe, President, ICCR and Member of Parliament; Priyank Kanoongo, Chairperson, NCPCR; and Rekha Sharma, Chairperson, NCW, at the ICCR Auditorium, Delhi. The session was also attended by CyberPeace Foundation team members.
Vinay Sahasrabuddhe, a strong advocate of children's online safety, shared his vision and focused on three R's: Research, Reform and Reshape. He recommended extensive research to voice concerns and remedies grounded in evidence, which would help reform intervention strategies and reshape the existing framework to best protect women and children in cyberspace. NCW Chairperson Rekha Sharma shared how critical it is to create awareness about women's online safety rights and reiterated that such awareness must reach the last mile in order to build collective action and bring change. She also mentioned the need to conduct nationwide training for police personnel to handle and report online distress.
Priyank Kanoongo, Chairperson of NCPCR, has proactively advocated the cause of child online protection and has been instrumental in fiercely voicing his thoughts on protecting the online safety rights of children across India. At the launch, he said there is a dire need to educate parents about online safety so that the information trickles down to their children. He added that NCPCR holds no inhibitions in naming and shaming violators of child rights, offline or online, and will always raise a strong voice against platforms' inability to protect children in cyberspace.
Vineet Kumar, Founder and Global President of CyberPeace Foundation, the partnering organisation, shared that this nationwide movement will build great momentum for the cause of online protection of children and women across the country, and urged organisations across India to pledge their support. The more people who join the movement, the greater the collective pressure to formulate guidelines and policies that make cyberspace safe for children and women. Sonali Patankar, Founder of Responsible Netism, shared that the objective of the campaign is to let online safety reach the last mile and to build on aggressive reporting of online content. The India Fights Cyber Violence campaign is an effort to make India cyber safe for children.
She shared that the campaign launch would be followed by nationwide research to understand parents' perspectives on cyber violence, which would help in presenting citizens' recommendations on women's and children's safety protocols. A round table for organisations working with children, chaired by Priyank Kanoongo, would be held on November 22, followed by a round table for organisations working with women, chaired by Rekha Sharma, on December 22, 2021. The campaign would culminate in Responsible Netism's 6th National Cyber Psychology Conference on January 22, 2022, featuring a compilation of the research and the work done throughout the campaign.
The launch was attended by Sujay Patki, social activist and advisor to Responsible Netism; Shilpa Chandolikar, trustee of Responsible Netism; and Adv. Khushbu Jain, advocate at the Supreme Court of India, followed by a vote of thanks by Unmesh Joshi, co-founder of Responsible Netism. With the success of the launch and the support of NCPCR and NCW, we are confident of making this a nationwide movement to protect netizens' cyber safety rights, and we strongly believe in collective action to make India cyber safe for women and children.
This story is provided by NewsVoir. ANI will not be responsible in any way for the content of this article. (ANI/NewsVoir)(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)
In the vast, uncharted territories of the digital world, a sinister phenomenon is proliferating at an alarming rate. It's a world where artificial intelligence (AI) and human vulnerability intertwine in a disturbing combination, creating a shadowy realm of non-consensual pornography. This is the world of deepfake pornography, a burgeoning industry that is as lucrative as it is unsettling.
According to a recent assessment, at least 100,000 deepfake porn videos are readily available on the internet, with hundreds, if not thousands, being uploaded daily. This staggering statistic prompts a chilling question: what is driving the creation of such a vast number of fakes? Is it merely for amusement, or is there a more sinister motive at play?
Recent Trends and Developments
An investigation by India Today’s Open-Source Intelligence (OSINT) team reveals that deepfake pornography is rapidly morphing into a thriving business. AI enthusiasts, creators, and experts are lending their expertise, investors are injecting money, and companies ranging from small financial firms to giants like Google, VISA, Mastercard, and PayPal are being misused in this dark trade. Synthetic porn has existed for years, but advances in AI and the increasing availability of the technology have made it easier, and more profitable, to create and distribute non-consensual sexually explicit material. The 2023 State of Deepfake report by Home Security Heroes reveals a staggering 550% increase in the number of deepfakes compared to 2019.
What’s the Matter with Fakes?
But why should we be concerned about these fakes? The answer lies in the real-world harm they cause. India has already seen cases of extortion carried out by exploiting deepfake technology. An elderly man in UP’s Ghaziabad, for instance, was tricked into paying Rs 74,000 after receiving a deepfake video of a police officer. The situation could have been even more serious had the perpetrators decided to create deepfake porn of the victim.
The danger is particularly severe for women. The 2023 State of Deepfake report estimates that at least 98 percent of all deepfakes are porn and that 99 percent of the victims are women. A study by Harvard University refrained from using the term “pornography” for the creation, sharing, or threatened creation/sharing of sexually explicit images and videos of a person without their consent. “It is abuse and should be understood as such,” it states.
Based on interviews with victims of deepfake porn last year, the study said 63 percent of participants described experiences of “sexual deepfake abuse” and reported that their sexual deepfakes had been monetised online. It also found “sexual deepfake abuse to be particularly harmful because of the fluidity and co-occurrence of online and offline experiences of abuse, resulting in endless reverberations of abuse in which every aspect of the victim’s life is permanently disrupted”.
Creating deepfake porn is disturbingly easy. There are broadly two types of deepfakes: one featuring the faces of real humans and another featuring computer-generated, hyper-realistic faces of non-existent people. The first category is particularly concerning: it is created by superimposing the faces of real people on existing pornographic images and videos, a task made simple by AI tools.
During the investigation, platforms hosting deepfake porn of stars like Jennifer Lawrence, Emma Stone, Jennifer Aniston, Aishwarya Rai, and Rashmika Mandanna, as well as TV actors and influencers like Aanchal Khurana, Ahsaas Channa, Sonam Bajwa, and Anveshi Jain, were encountered. It takes a few minutes and as little as Rs 40 for a user to create a high-quality 15-second fake porn video on platforms like FakeApp and FaceSwap.
The Modus Operandi
These platforms brazenly flaunt their business associations and hide behind frivolous disclaimers such as claiming the content is “meant solely for entertainment” and “not intended to harm or humiliate anyone”. The irony of these disclaimers is lost on no one, especially when the platforms host thousands of non-consensual deepfake pornographic videos.
As fake porn content and its consumers surge, deepfake porn sites are rushing to forge collaborations with generative AI service providers and have integrated their interfaces for enhanced interoperability. The promise and potential of making quick bucks have given birth to step-by-step guides, video tutorials, and websites that offer tools and programs, recommendations, and ratings.
Nearly 90 per cent of all deepfake porn is hosted by dedicated platforms that charge for long-duration premium fake content and for creating porn of whoever a user wants, even taking requests for celebrities. To encourage creators further, they enable them to monetise their content.
One such website, Civitai, has a system in place that pays “rewards” to creators of AI models that generate “images of real people”, including ordinary people. It also enables users to post AI images, prompts, model data, and LoRA (low-rank adaptation of large language models) files used in generating the images. Model data designed for adult content is gaining great popularity on the platform, and celebrities are not the only targets: ordinary people are equally susceptible.
Access to premium fake porn, like any other premium content, requires payment. But how can a gateway process payment for sexual content that lacks consent? Financial institutions and banks, it seems, are not paying much attention to this legal question. During the investigation, many such websites accepting payments through services like VISA, Mastercard, and Stripe were found.
Those who have failed to register or partner with these fintech giants have found workarounds. While some direct users to third-party sites, others use personal PayPal accounts to manually collect money in the personal accounts of their employees or stakeholders, which potentially violates the platform's terms of use banning the sale of “sexually oriented digital goods or content delivered through a digital medium.”
Among others, the MakeNude.ai web app – which lets users “view any girl without clothing” in “just a single click” – has an interesting method of circumventing restrictions around the sale of non-consensual pornography. The platform has partnered with Ukraine-based Monobank and Dublin’s BetaTransfer Kassa which operates in “high-risk markets”.
BetaTransfer Kassa admits to serving “clients who have already contacted payment aggregators and received a refusal to accept payments, or aggregators stopped payments altogether after the resource was approved or completely freeze your funds”. To make payment processing easy, MakeNude.ai seems to be exploiting the donation ‘jar’ facility of Monobank, which is often used by people to donate money to Ukraine to support it in the war against Russia.
The Indian Scenario
India is currently working on dedicated legislation to address issues arising from deepfakes, though existing general laws requiring platforms to remove offensive content also apply to deepfake porn. However, prosecuting and convicting offenders is extremely difficult for law enforcement agencies, as this is a borderless crime that can involve several countries.
A victim can register a police complaint under Sections 66D and 66E of the IT Act, 2000. The recently enacted Digital Personal Data Protection Act, 2023 aims to protect users' digital personal data. The Union Government has also issued an advisory to social media intermediaries to identify misinformation and deepfakes. The comprehensive law promised by Union IT Minister Ashwini Vaishnaw should be able to address these challenges.
Conclusion
In the end, the unsettling dance of AI and human vulnerability continues in the shadowy world of deepfake pornography. It is as disturbing as it is fascinating, raising questions about the ethical use of technology, the protection of individual rights, and the responsibility of financial institutions. It is a phenomenon we must all be aware of, for it affects us all.
References
- https://www.indiatoday.in/india/story/deepfake-porn-artificial-intelligence-women-fake-photos-2471855-2023-12-04
- https://www.hindustantimes.com/opinion/the-legal-net-to-trap-peddlers-of-deepfakes-101701520933515.html
- https://indianexpress.com/article/opinion/columns/with-deepfakes-getting-better-and-more-alarming-seeing-is-no-longer-believing/