Fake accounts and job scams are on the rise, and LinkedIn is leveraging AI technologies to keep its users safe and secure. Read on to discover how LinkedIn is tackling these challenges head-on to ensure a safe and supportive professional networking environment for all.
Updated Apr 30, 2023 | 03:47 PM IST
Oscar Rodriguez, Vice President, Product Management, LinkedIn
- LinkedIn unleashes AI and deep learning to combat job scams and fake profiles.
- LinkedIn goes all-in on skills-first approach, helping professionals upskill for the future.
- Women’s safety on LinkedIn: No more romantic advances, meme-sharing, or unprofessional behavior.
Siddharth: How does the increasing sophistication of internet and job scams require a strong, robust tech ecosystem?
Oscar: It’s true that job scams have become more sophisticated and harder to detect. Fraudsters use various tactics, such as fake job postings, phishing emails, and social engineering, to trick people into giving away their personal information or money.
At LinkedIn, we recognize the importance of a strong, robust tech ecosystem to combat these scams. We use sophisticated tech like AI, machine learning, and a new deep-learning-based model, paired with teams of experts, to proactively detect potentially harmful behaviour and fake profiles, which helps us take quick action to protect our members.
We also know that relying solely on technology is not enough. LinkedIn’s policies do not tolerate inappropriate activities or behaviour such as spam, harassment, and scams. We also believe in the importance of educating people about how to identify potential threats, what can go wrong, and how to report suspicious activities. By working together, we can create a safer and more secure online environment for all.
Siddharth: With text-to-image AI and large language models, impersonating someone and creating a convincing LinkedIn profile is much easier now. What is LinkedIn doing to stop fake accounts from duping job seekers out of their time and money?
Oscar: We are investing in our teams, tools and technology to mitigate the risk of fake accounts. This includes strengthening our defences at every touchpoint, beginning with improving our registration defences and challenges, which allows us to prevent bad actors from creating fake accounts at scale. We’re also enhancing our current defence models, making them faster and more effective, and improving our in-product messaging and warnings so members can help keep themselves safe.
Last week we announced new, free verification features for members including the ability for some members to verify their work email addresses. While this isn’t yet available to all members, we are already working to expand availability and unlock new ways to verify.
We’ve also created a deep learning model to efficiently catch profiles made with AI-based synthetic image generation technology. AI-based image generators can create numerous unique, high-quality profile photos that do not correspond to real people. Fake accounts sometimes use these convincing, AI-generated profile photos to make their fake LinkedIn profile appear more authentic.
Siddharth: Skilling and re-skilling are a major part of everyone’s career. What is LinkedIn’s roadmap?
Oscar: One of our enduring goals is to build a skills-first labour market. We believe that skills are the currency of the future, and we are consistently making strides in that direction through research, partnerships, and product-level developments.
Over the past few years, our product updates for working professionals have been designed with this commitment in mind. In March 2021, our CEO Ryan Roslansky introduced the Skills Graph in a blog post; it uses technologies such as machine learning and natural language processing to map the connections between 875 million people, 59 million organisations, and 39,000 skills. Since then, we’ve used our technology to encourage members to adopt a skills-first approach. In LinkedIn Recruiter, we nudge recruiters to expand their searches across gender and other markers by looking for different kinds of skills. As for jobseekers on our platform, we inform them when they have a high skill overlap with a job role, bringing them closer to relevant and meaningful opportunities.
Siddharth: LinkedIn is the world’s largest professional networking platform; however, some users cross the line and start treating it like Facebook, reaching out to female job seekers, posting memes, and so on. What is LinkedIn doing about this?
Oscar: We work hard to make the LinkedIn community safe for every member. Our community guidelines don’t allow romantic advances on the platform, and we reinforce these firmly to keep members, especially women, safe from inappropriate experiences. We have updated our automated tools to detect potentially harmful behaviour and warn members when engaging via private messaging.
If a potentially harmful message is detected, it will either go directly to your spam folder or be hidden with a warning. If either happens, we’ve also made it easier for members to report it and for our teams to take action. With an optional advanced safety feature that, when enabled, displays warnings on LinkedIn messages that are suspected of containing high-risk or potentially harmful content, we’ve provided our members with an extra layer of security on the platform.
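LinkedIn has not published how its message classifier works, but the routing Oscar describes (spam folder, hidden with a warning, or delivered, with the optional safety feature warning earlier) can be sketched as a simple decision flow. Everything below is an invented stand-in: the phrases, weights, and thresholds are illustrative assumptions, not LinkedIn's model.

```python
from enum import Enum

class Action(Enum):
    DELIVER = "deliver"
    HIDE_WITH_WARNING = "hide_with_warning"
    SPAM_FOLDER = "spam_folder"

# Invented phrase weights standing in for a learned risk model.
RISKY_PHRASES = {
    "wire transfer": 0.6,
    "processing fee": 0.6,
    "crypto wallet": 0.6,
    "click this link": 0.35,
}

def risk_score(text: str) -> float:
    """Sum the weights of risky phrases found in the message, capped at 1.0."""
    lowered = text.lower()
    return min(1.0, sum(w for phrase, w in RISKY_PHRASES.items() if phrase in lowered))

def route_message(text: str, advanced_safety: bool = False) -> Action:
    """Route a message: spam folder, hidden with a warning, or delivered."""
    score = risk_score(text)
    warn_threshold = 0.3 if advanced_safety else 0.5  # the optional feature warns earlier
    if score >= 0.8:
        return Action.SPAM_FOLDER
    if score >= warn_threshold:
        return Action.HIDE_WITH_WARNING
    return Action.DELIVER
```

With the hypothetical weights above, a message combining "processing fee" and "wire transfer" goes to spam, while a borderline "click this link" message is only flagged when the optional advanced safety feature is enabled.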
Siddharth: How is LinkedIn investing in new AI technologies such as advanced network, computer vision and natural language processing algorithms for anomaly detection with fake profiles and communities?
Oscar: While conversations about emerging tech have recently surfaced, AI is not new to us. For over 15 years, LinkedIn has used emerging tech such as AI to help enhance our members’ experience on the platform. Our automated systems pair AI with teams of experts to stop the vast majority of detected fake accounts before they appear in our community. You’ll see in our transparency report that 96% of detected fake accounts and 99.1% of detected spam and scams are caught and removed by our automated defences.
We recently introduced a new deep-learning-based model which proactively checks profile photo uploads to determine if the image is AI-generated. This model uses cutting-edge technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric analyses. This technology will enhance our automated anti-abuse defences to help detect and remove fake accounts before they reach our members.
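The specifics of LinkedIn's detector are not public, but one published family of techniques looks for frequency-domain artifacts that generators' upsampling layers leave in synthetic images. The sketch below is purely illustrative: the band size, the threshold, and the assumption that anomalous spectral energy indicates a synthetic image are placeholders for what a real system would learn from labelled data.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band.

    `image` is a 2-D grayscale array. Some synthetic-image detectors
    examine the frequency spectrum for artifacts of the generation process.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2          # centre of the shifted spectrum
    bh, bw = h // 4, w // 4          # half-extent of the low-frequency band
    low = spectrum[ch - bh:ch + bh, cw - bw:cw + bw].sum()
    return float(1.0 - low / spectrum.sum())

def looks_synthetic(image: np.ndarray, threshold: float = 0.5) -> bool:
    """Hypothetical decision rule; a real detector would learn this boundary."""
    return high_freq_energy_ratio(image) > threshold
```

A smooth image concentrates its energy near the spectrum's centre and scores low, while noisy, artifact-heavy content scores high; a production classifier would replace this single statistic with a trained deep model.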
Siddharth: In light of increasing privacy concerns, how is LinkedIn working to provide its users with more control over their data and the information they share on the platform?
Oscar: Our users’ safety and privacy are integral to our platform’s values, and we’ve realised that we can grant more control to our members by offering them two key tools:
The first is to provide them privacy options easily customisable to their requirements. Our privacy settings empower members to change how they share their data according to where they are in their career journeys. We work to make that process simple and straightforward.
Second, we are constantly educating our members on how they can stay safe on the platform. For instance, we encourage them to only connect with people they know and trust, we recommend following people instead of connecting if they don’t know them, and we strongly encourage members to not share any sensitive personal information on the platform. In return, we rely on members to inform us of their concerns so we can help improve their sense of security on the platform.
Siddharth: Social engineering attacks, such as phishing, have been on the rise. What measures is LinkedIn taking to educate its users on these threats and to detect and prevent such attacks on the platform?
Oscar: Technological developments have provided scammers with several ways to deceive people, and phishing is one of them. We are continuously working on educating members about the steps they should take to stay safe from phishing. We encourage members to protect themselves by enabling two-factor authentication and to report suspicious messages to us. Our internal teams work relentlessly to take action against those who attempt to harm LinkedIn members through phishing. Within LinkedIn Recruiter, companies caught violating our terms and conditions are suspended to keep job seekers on the platform safe.
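Two-factor authentication of the kind Oscar recommends commonly uses time-based one-time passwords (TOTP, standardised in RFC 6238). The sketch below shows how such a code is derived from a shared secret and the current time; it illustrates the general standard, not LinkedIn's implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant).

    `secret_b32` is the base32 shared secret shown when enrolling a device;
    `at` is a Unix timestamp (defaults to now); `step` is the time window.
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"
```

Because the code depends on the current 30-second window as well as the secret, a phished password alone is not enough to take over an account.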
Siddharth: How can educating and collaborating with members help tackle job scams? Any tips on how to identify red flags and avoid suspicious jobs?
Oscar: A vital part of building our platform is hearing from our members. We are constantly seeking their feedback, keeping them informed of how we are working to keep LinkedIn safe, and educating them on how to identify and dodge misleading behaviour.
We encourage our members to utilise the tools at their disposal – such as LinkedIn Verifications, prompts on out-of-network connection requests, and suspicious message warnings – to stay safe while connecting and conversing with other members. LinkedIn’s Get Hired newsletter provides timely job search advice and shows which companies are currently hiring and the Top Voices to follow to stay in the know.
We are also working with career experts and influencers like Elizabeth Houghton, Sakshi Chandraakar and Eric Sim to provide guidance to help members identify, avoid, and report suspicious jobs:
- If someone asks for payment, that’s a red flag; you should never give out your credit card or private identity information.
- Beware of postings that claim high pay for little work.
- A job offer after just one remote interview is rarely a legitimate deal.
- If you see something that doesn’t look right, please report it to us so we can investigate.