According to a troubling report, multiple scam loan apps on the App Store in India have been charging excessive interest rates and forcing users to pay by blackmailing and harassing them. Apple has banned these apps and removed them from the App Store, but if you downloaded any of them, they may still be installed and running on your iPhone and should be deleted. Read on to learn the names of these apps and how the fraud operated.
Why did Apple ban these apps?
Apple has taken action to remove certain apps from the Indian App Store. These apps were engaging in unethical behaviour, such as impersonating financial institutions, demanding high fees, and threatening borrowers. Here are the titles of these apps, as well as what Apple has said about their suspension.
Following user complaints, Apple removed six loan apps from the Indian App Store, including White Kash, Pocket Kash, Golden Kash, Ok Rupee, and others.
According to multiple user reviews, these apps sought unjustified access to users’ contact lists and media. They also charged exorbitant, unwarranted fees. Furthermore, the companies behind them were found to engage in unethical tactics such as charging high interest rates and “processing fees” equal to half the loan amount.
Some users of these lending apps have reported being harassed and threatened for failing to repay their loans on time. In some cases, the apps threatened to contact the user’s contacts if payment was not completed by the deadline. According to one user, the app company threatened to create and send fake photographs of her to her contacts.
According to Apple, these loan apps were removed from the App Store because they violated the terms of the Apple Developer Program License Agreement. The apps were found to be falsely claiming affiliations with financial institutions.
Issue of Fake loan apps on the App Store
“The App Store and our App Review Guidelines are designed to ensure we provide our users with the safest experience possible,” Apple explained. “We do not tolerate fraudulent activity on the App Store and have strict rules against apps and developers who attempt to game the system.”
In 2022, Apple blocked nearly $2 billion in fraudulent App Store transactions. It also rejected nearly 1.7 million app submissions that did not meet Apple’s quality and safety criteria and terminated 428,000 developer accounts over suspected fraudulent activity.
The scammers also used heinous tactics to force borrowers to pay. According to reports, the scammers behind the apps gained access to users’ contact lists as well as their images. They would morph the images and then intimidate the victim by threatening to share the fake nude photos with their entire contact list.
Dangerous financial fraud apps have surfaced on the App Store
TechCrunch acquired a user review from one of these apps. “I borrowed an amount in a helpless situation, and a day before the repayment due date, I got some messages with my picture and my contacts in my phone saying that repay your loan or they will inform our contacts that you are not paying the loan,” it said.
Sandhya Ramesh, a journalist from The Print, recently tweeted a screenshot of a direct message she got. A victim’s friend told a similar story in the message.
TechCrunch contacted Apple, who confirmed that the apps had been removed from the App Store for breaking the Apple Developer Program License Agreement and guidelines.
Conclusion
Some users have reported that quick-loan applications such as White Kash, Pocket Kash, and Golden Kash appeared on the Top Finance apps chart in recent days. These apps demand unauthorised and intrusive access to users’ contact lists and media. According to hundreds of user reviews, they charged exorbitant and unjustified fees and used unscrupulous techniques such as demanding “processing fees” equal to half the loan amount and charging high interest rates. Users were also harassed and threatened over repayment. If payments were not made by the due date, the lending apps threatened to notify users’ contacts. According to one user, the app provider even threatened to generate fake nude images of her and send them to her contacts.
In an era when misinformation spreads like wildfire across the digital landscape, the need for effective counter-strategies has grown rapidly. Prebunking and Debunking are two approaches for countering the growing spread of misinformation online. Prebunking empowers individuals by teaching them to discern between true and false information, acting as a protective layer that comes into play even before people encounter malicious content. Debunking is the correction of false or misleading claims after exposure, aiming to undo or reverse the effects of a particular piece of misinformation. Debunking includes methods such as fact-checking, algorithmic correction on a platform, social correction by an individual or group of online peers, and fact-checking reports by expert organisations or journalists. An integrated approach involving both strategies can be effective in countering the rapid spread of misinformation online.
Brief Analysis of Prebunking
Prebunking is a proactive practice that seeks to rebut erroneous information before it spreads. The goal is to train people to critically analyse information and develop ‘cognitive immunity’ so that they are less likely to be misled when they do encounter misinformation.
The Prebunking approach, grounded in inoculation theory, teaches people to recognise, analyse and avoid manipulation and misleading content so that they build resilience against it. Inoculation theory, a social psychology framework, suggests that pre-emptively conferring psychological resistance against malicious persuasion attempts can reduce susceptibility to misinformation across cultures. As the term suggests, the idea is to help the mind develop resistance now to influence it may encounter in the future. Just as medical vaccines help the body build resistance to future infections by administering weakened doses of the pathogen, inoculation theory seeks to teach people to tell fact from fiction through exposure to examples of weak, dichotomous arguments, manipulation tactics like emotionally charged language, case studies that draw parallels between truths and distortions, and so on. In showing people the difference, inoculation theory teaches them to be on the lookout for misinformation and manipulation even, or especially, when they least expect it.
The core difference between Prebunking and Debunking is that while the former is preventative and seeks to provide broad-spectrum cover against misinformation, the latter is reactive and focuses on specific instances of misinformation. While Debunking is closely tied to fact-checking, Prebunking draws on a wider range of interventions, some of which increase the motivation to be vigilant against misinformation while others increase the ability to exercise that vigilance successfully.
There is much to be said in favour of the Prebunking approach, because these interventions build the capacity to identify misinformation and recognise red flags. However, their success in practice may vary. It can be difficult to scale up Prebunking efforts and ensure they reach a large audience. Sustainability is also critical in ensuring that Prebunking measures maintain their impact over time: continuous reinforcement and reminders may be required so that individuals retain the skills and knowledge gained from Prebunking training activities. Misinformation tactics and strategies are always evolving, so Prebunking interventions must be flexible and agile and respond promptly to emerging challenges. This may be easier said than done, but with new misinformation and cyber threats developing frequently, it is a challenge that must be addressed for Prebunking to be a successful long-term solution.
Encouraging people to be actively cautious while interacting with information, to acquire critical thinking abilities, and to resist the effects of misinformation requires a significant behavioural change over a relatively short period of time. Overcoming ingrained habits and biases, and countering a natural reluctance to change, is no mean feat. Developing a widespread culture of information literacy requires years of social conditioning and unlearning, and this may pose a significant challenge to the effectiveness of Prebunking interventions.
Brief Analysis of Debunking
Debunking is a technique for identifying false or misleading news items and informing people that they are incorrect. It seeks to lessen the impact of misinformation that has already spread. The most common form of Debunking occurs through collaboration between fact-checking organisations and social media companies: journalists or other fact-checkers identify inaccurate or misleading material, and social media platforms flag or label it. Debunking is an important strategy for curtailing the spread of misinformation and promoting accuracy in the digital information ecosystem.
Debunking interventions are crucial in combating misinformation, but they come with certain challenges. Debunking entails critically verifying facts and promoting corrected information, which is difficult owing to the rising sophistication of modern tools used to generate narratives that blend truth and untruth, opinion and fact. These advanced approaches, which include emotionally manipulative elements, deepfakes, audiovisual material, and pervasive trolling, necessitate a sophisticated response at all levels: technological, organisational, and cultural.
Furthermore, it is impossible to debunk all misinformation at any given time, which effectively means it is impossible to protect everyone at all times; at least some innocent netizens will fall victim to manipulation despite our best efforts. Debunking is inherently reactive, addressing misinformation after it has already spread widely. From the perspective of total harm done, this reactionary method may be less effective than proactive strategies such as Prebunking. Misinformation producers operate swiftly and unpredictably, making it difficult for fact-checkers to keep up with the rapid dissemination of false or misleading information. Debunked claims may also require repeated exposure to corrections to prevent erroneous beliefs from taking hold, implying that a single Debunking may not be enough to rectify misinformation. Debunking requires time and resources, and it is not possible to disprove every piece of misinformation circulating at any particular moment. This constraint may allow some misinformation to go unchecked, potentially leading to unexpected effects. Misinformation on social media can spread quickly and go viral faster than Debunking pieces or articles, leading to a situation in which misinformation spreads like a virus while the corrective struggles to catch up.
Prebunking vs Debunking: Comparative Analysis
Prebunking interventions seek to teach people to recognise and reject misinformation before they are exposed to actual manipulation. Prebunking offers tactics for critical examination, lessening individuals' susceptibility to misinformation in a variety of contexts. Debunking interventions, on the other hand, involve correcting specific false claims after they have circulated. While Debunking can address individual instances of misinformation, its impact on reducing overall reliance on misinformation may be limited by the reactive nature of the approach.
CyberPeace Policy Recommendations for Tech/Social Media Platforms
With the rising threat of online misinformation, tech/social media platforms should adopt an integrated strategy that deploys and supports both Prebunking and Debunking initiatives across their platforms, empowering users to recognise manipulative messaging through Prebunking and to learn whether specific claims are accurate through Debunking interventions.
Gamified Inoculation: Tech/social media companies can encourage gamified inoculation campaigns, a competence-oriented approach to Prebunking misinformation. Such campaigns can help immunise recipients against subsequent exposure to misinformation and empower people to build the competencies needed to detect it through gamified interventions.
Promotion of Prebunking and Debunking Campaigns through Algorithmic Mechanisms: Tech/social media platforms can ensure that their algorithms prioritise the distribution of Prebunking materials to users, boosting educational content that strengthens resistance to misinformation. Platform operators should also incorporate algorithms that prioritise the visibility of Debunking content in order to combat the spread of false information and deliver timely corrections. Together, this can help both Prebunking and Debunking material reach a larger or better-targeted audience, as the illustrative sketch below suggests.
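To illustrate what "prioritising the visibility" of counter-misinformation content could look like in practice, here is a minimal, purely hypothetical sketch in Python of a feed re-ranking step that boosts posts carrying Prebunking or Debunking labels. The labels, field names, and boost factors are assumptions made for illustration only and do not represent any platform's actual ranking system.

```python
from dataclasses import dataclass

# Hypothetical content labels that a platform's integrity pipeline might assign.
PREBUNKING = "prebunking"   # educational, inoculation-style content
DEBUNKING = "debunking"     # fact-check / correction content

@dataclass
class Post:
    post_id: str
    base_score: float          # engagement-based relevance score
    integrity_label: str = ""  # "", "prebunking", or "debunking"

def ranked_feed(posts: list[Post],
                prebunk_boost: float = 1.3,
                debunk_boost: float = 1.5) -> list[Post]:
    """Re-rank a candidate feed so labelled counter-misinformation content
    surfaces more prominently. Boost factors are illustrative, not prescriptive."""
    def final_score(post: Post) -> float:
        if post.integrity_label == DEBUNKING:
            return post.base_score * debunk_boost
        if post.integrity_label == PREBUNKING:
            return post.base_score * prebunk_boost
        return post.base_score
    return sorted(posts, key=final_score, reverse=True)

# Example: a fact-check with a modest base score outranks an unlabelled post.
feed = ranked_feed([
    Post("p1", base_score=0.80),
    Post("p2", base_score=0.60, integrity_label=DEBUNKING),
])
print([p.post_id for p in feed])  # -> ['p2', 'p1']
```

In practice, any such boosting would need to be paired with robust labelling, for instance through the fact-checking partnerships discussed below, so that only verified corrective content is amplified.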
User Empowerment to Counter Misinformation: Tech/social media platforms can design user-friendly interfaces that give people access to Prebunking materials, quizzes, and instructional content to help them improve their critical thinking abilities. Furthermore, they can incorporate simple reporting tools for flagging misinformation, along with links to fact-checking resources and corrections.
Partnership with Fact-Checking/Expert Organisations: Tech/social media platforms can facilitate Prebunking and Debunking initiatives and campaigns by collaborating with fact-checking and expert organisations, promoting such initiatives at a larger scale and ultimately fighting misinformation through joint initiatives.
Conclusion
The threat of online misinformation is growing with every passing day, so deploying effective countermeasures is essential. Prebunking and Debunking are two such interventions. To sum up: Prebunking interventions try to build resilience to misinformation, proactively lowering susceptibility to false or misleading information and addressing broader patterns of misinformation consumption, while Debunking is effective in correcting a particular piece of misinformation and has a targeted impact on belief in individual false claims. An integrated approach involving both methods, along with joint initiatives by tech/social media platforms and expert organisations, can help fight the rising tide of online misinformation and establish a resilient online information landscape.
Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, announced that rules under the Digital Personal Data Protection (DPDP) Act are expected to be released by the end of January. The rules will be subject to a month-long consultation process, but their notification may be delayed until after the general elections in April-May 2024. Chandrasekhar also mentioned that changes to the current IT rules would be made in the coming days to address the problem of deepfakes on social networking sites.
The government has observed a varied response from platforms to its advisory measures on deepfakes, leading to the decision to enforce more specific rules. During the Digital India Dialogue, platforms were made aware of existing provisions and the consequences of non-compliance. An advisory has been issued, and amended IT rules will be released if compliance is not found satisfactory.
When Sachin Tendulkar reported a deepfake video in which he appeared to endorse a gaming application, it raised concerns about the exploitation of deepfakes. Tendulkar urged the reporting of such incidents and underlined the need for social media companies to be watchful, receptive to grievances, and quick to address disinformation and deepfakes.
The DPDP Act, 2023
The Digital Personal Data Protection (DPDP) Act, 2023 is a new framework that aims to protect individuals' digital personal data and ensure compliance by the platforms that collect it. The Act provides for consent-based data collection and is an important step toward protecting individual privacy. By requiring express consent for the collection, management, and processing of personal data, it seeks to guarantee that organisations adhere to the stated purpose for which user consent was granted. This proactive approach aligns with global data protection trends and demonstrates India's commitment to safeguarding user information in the digital era.
Amendments to IT rules
Minister Chandrasekhar declared that existing IT rules would be amended in order to combat the rising problem of deepfakes and disinformation on social media platforms. These amendments, which will be published over the next few days, are primarily aimed at countering the widespread dissemination of false information and deepfakes. The decision follows a range of responses from platforms to the deepfake advisories issued during the Digital India Dialogues.
The government's stance: blocking non-compliant platforms
Minister Chandrasekhar reaffirmed the government's commitment to enforcing the updated guidelines. If platforms fail to comply, the government may consider blocking them. This firm position demonstrates the government's commitment to safeguarding Indian residents from the potential harm caused by false information.
Empowering Users with Education and Awareness
In addition to the upcoming DPDP Act Rules/recommendations and IT regulation changes, the government recognises the critical role that user education plays in establishing a robust digital environment. Minister Rajeev Chandrasekhar emphasised the necessity for comprehensive awareness programs to educate individuals about their digital rights and the need to protect personal information.
These educational programs seek to equip users to make informed decisions about consenting to the use of their data. By developing a culture of digital literacy, the government hopes to ensure that citizens have the knowledge to safeguard themselves in an increasingly connected digital environment.
Balancing Innovation with User Protection
As India continues to explore its digital frontier, the intersection of technological innovation and user safety remains a difficult balance. The upcoming Rules under the DPDP Act and the modifications to existing IT rules represent the government's proactive efforts to build a strong framework that supports innovation while protecting user privacy and combating disinformation. Recognising the changing nature of the digital world, the government is actively engaged in continuing discussions with stakeholders such as industry professionals, academia, and civil society. These conversations promote a collaborative approach to policy creation, ensuring that legislation is adaptable to evolving cyber risks and technological breakthroughs. Such inclusive talks demonstrate the government's dedication to transparent and participatory governance, in which many viewpoints contribute to the creation of effective and nuanced policy. These developments mark an important milestone in India's digital journey as the country prepares to set a good example by creating responsible and safe digital ecosystems for its residents.
Recently, a viral social media post alleged that the Delhi Metro Rail Corporation Ltd. (DMRC) had increased ticket prices following the BJP’s victory in the Delhi Legislative Assembly elections. After thorough research and verification, we have found this claim to be misleading and entirely baseless. Authorities have asserted that no fare hike has been declared.
Claim:
Viral social media posts have claimed that the Delhi Metro Rail Corporation Ltd. (DMRC) increased metro fares following the BJP's victory in the Delhi Legislative Assembly elections.
After thorough research, we conclude that the claims regarding a fare hike by the Delhi Metro Rail Corporation Ltd. (DMRC) following the BJP’s victory in the Delhi Legislative Assembly elections are misleading. Our review of DMRC’s official website and social media handles found no mention of any fare increase. Furthermore, the official X (formerly Twitter) handle of DMRC has also clarified that no such price hike has been announced. We urge the public to rely on verified sources for accurate information and refrain from spreading misinformation.
Conclusion:
Upon examining the alleged fare hike, it is evident that the increase pertains to Bengaluru, not Delhi. To verify this, we reviewed the official website of Bangalore Metro Rail Corporation Limited (BMRCL) and cross-checked the information with appropriate evidence, including relevant images. Our findings confirm that no fare hike has been announced by the Delhi Metro Rail Corporation Ltd. (DMRC).