Usr6

Active Members
  • Posts

    1337
  • Joined

  • Last visited

  • Days Won

    89

Usr6 last won the day on March 17 2022

Usr6 had the most liked content!

6 Followers

About Usr6

  • Birthday 01/01/1919

Profile Information

  • Gender
    Male

  • Interests
    Malware Analysis, Software Testing, Reverse Engineering, etc.

Recent Profile Visitors

15172 profile views


2.2k

Reputation

  1. Are you still alive, sir 1337? ♠️

  2. Python is an amazing language with a strong and friendly community of programmers. However, there is a lack of documentation on what to learn after getting the basics of Python down. Through this book I aim to solve this problem. I will give you bits of information about some interesting topics which you can further explore. The topics discussed in this book open up your mind towards some nice corners of the Python language. This book is an outcome of my desire to have something like this when I was beginning to learn Python. Whether you are a beginner, an intermediate, or even an advanced programmer, there is something for you in this book. Please note that this book is not a tutorial and does not teach you Python. The topics are not explained in depth; instead, only the minimum required information is given. I am sure you are as excited as I am, so let's start!

     Note: This book is a continuous work in progress. If you find anything which you can further improve (I know you will find a lot of stuff) then kindly submit a pull request!

     Author: I am Muhammad Yasoob Ullah Khalid. I have been programming extensively in Python for over 3 years now. I have been involved in a lot of open source projects. I regularly blog about interesting Python topics over at my blog. In 2014 I also spoke at EuroPython, which was held in Berlin; it is the biggest Python conference in Europe. If you have an interesting internship opportunity for me then I would definitely like to hear from you!

     Table of Contents:
     1. *args and **kwargs
        1.1. Usage of *args
        1.2. Usage of **kwargs
        1.3. Using *args and **kwargs to call a function
        1.4. When to use them?
     2. Debugging
     3. Generators
        3.1. Iterable
        3.2. Iterator
        3.3. Iteration
        3.4. Generators
     4. Map, Filter and Reduce
        4.1. Map
        4.2. Filter
        4.3. Reduce
     5. set Data Structure
     6. Ternary Operators
     7. Decorators
        7.1. Everything in Python is an object
        7.2. Defining functions within functions
        7.3. Returning functions from within functions
        7.4. Giving a function as an argument to another function
        7.5. Writing your first decorator
        7.6. Decorators with Arguments
     8. Global & Return
        8.1. Multiple return values
     9. Mutation
     10. __slots__ Magic
     11. Virtual Environment
     12. Collections
        12.1. defaultdict
        12.2. OrderedDict
        12.3. counter
        12.4. deque
        12.5. namedtuple
        12.6. enum.Enum (Python 3.4+)
     13. Enumerate
     14. Object introspection
        14.1. dir
        14.2. type and id
        14.3. inspect module
     15. Comprehensions
        15.1. list comprehensions
        15.2. dict comprehensions
        15.3. set comprehensions
     16. Exceptions
        16.1. Handling multiple exceptions
     17. Lambdas
     18. One-Liners
     19. For - Else
        19.1. else clause
     20. Python C extensions
        20.1. CTypes
        20.2. SWIG
        20.3. Python/C API
     21. open Function
     22. Targeting Python 2+3
     23. Coroutines
     24. Function caching
        24.1. Python 3.2+
        24.2. Python 2+
     25. Context managers
        25.1. Implementing a Context Manager as a Class
        25.2. Handling exceptions
        25.3. Implementing a Context Manager as a Generator

     Link: http://book.pythontips.com/en/latest/index.html
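The book's very first topic can be illustrated with a short, self-contained sketch (my own illustration, not code from the book):

```python
def describe(*args, **kwargs):
    """Collect extra positional arguments into a tuple and
    extra keyword arguments into a dict (topics 1.1 and 1.2)."""
    return "args={} kwargs={}".format(args, kwargs)

def add(a, b, c):
    return a + b + c

# Topic 1.3: unpacking a list and a dict when *calling* a function.
nums = [1, 2]
extras = {"c": 3}
print(describe(*nums, **extras))  # args=(1, 2) kwargs={'c': 3}
print(add(*nums, **extras))       # 6
```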
  3. Photonic Side Channel Attacks Against RSA
     Elad Carmon, Jean-Pierre Seifert, Avishai Wool

     Abstract: This paper describes the first attack utilizing the photonic side channel against a public-key crypto-system. We evaluated three common implementations of RSA modular exponentiation, all using the Karatsuba multiplication method. We discovered that the key length had marginal impact on resilience to the attack: attacking a 2048-bit key required only 9% more decryption attempts than a 1024-bit key. We found that the most dominant parameter impacting the attacker's effort is the minimal block size at which the Karatsuba method reverts to naive multiplication: even for parameter values as low as 32 or 64 bits our attacks achieve a 100% success rate with under 10,000 decryption operations. Somewhat surprisingly, we discovered that Montgomery's Ladder - commonly perceived as the most resilient of the three implementations to side-channel attacks - was actually the most susceptible: for 2048-bit keys, our attack reveals 100% of the secret key bits with as few as 4000 decryptions.

     Link: https://eprint.iacr.org/2017/108.pdf
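The key parameter the paper varies - the block size below which Karatsuba falls back to naive multiplication - can be shown in a minimal Python sketch (illustrative only; the paper attacks real crypto-library implementations, and the `MIN_BLOCK_BITS` constant here is my own stand-in for that cutoff):

```python
MIN_BLOCK_BITS = 32  # cutoff at which we revert to naive multiplication

def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers.
    Small operands fall through to the built-in multiplier."""
    if x.bit_length() <= MIN_BLOCK_BITS or y.bit_length() <= MIN_BLOCK_BITS:
        return x * y  # "naive" multiplication below the block size
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    a = karatsuba(x_hi, y_hi)                       # high * high
    d = karatsuba(x_lo, y_lo)                       # low * low
    e = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - d # cross terms, one multiply
    return (a << (2 * half)) + (e << half) + d

x = 0x1234567890ABCDEF1234567890ABCDEF
y = 0xFEDCBA0987654321
print(karatsuba(x, y) == x * y)  # True
```

Lowering `MIN_BLOCK_BITS` increases recursion depth, which is exactly the knob the authors found dominates the attacker's effort.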
  4. If you wanted an exhaustive reference for all the command line tools and utilities available in Windows, "/h" was as good as it got. Well, that was until last month, when Microsoft published a whopping big PDF with information on every single terminal command the operating system has to offer. The document, released on April 18, comes in at 4.6MB and 948 pages and covers the following platforms:
     • Windows Server (Semi-Annual Channel)
     • Windows Server 2016
     • Windows Server 2012 R2
     • Windows Server 2012
     • Windows Server 2008 R2
     • Windows Server 2008
     • Windows 10
     • Windows 8.1

     Even though Windows 7 is absent, it's fair to say a lot of the commands should work on the older OS. The reference isn't just limited to commands - it also contains tips for configuring the command prompt window, as well as changes you can make to the Registry to enable and disable features, such as filename / directory completion. Best of all, hyperlinks embedded in the file for each command jump directly to online documentation, so you can always check out the most up-to-date content. This does however raise the question: why have a gigantic reference PDF in the age of online documentation? I guess if you print it out, it makes for good toilet reading material?

     Windows Commands Reference [Microsoft, via Bleeping Computer]
     Source: https://www.lifehacker.com.au/2018/05/microsoft-publishes-massive-948-page-pdf-with-every-windows-terminal-command-you-could-ever-need/
  5. Syhunt Huntpad is a notepad application with features that are particularly useful to penetration testers and bug hunters - a collection of common injection string generators, hash generators, encoders and decoders, HTML and text manipulation functions, and so on, coupled with syntax highlighting for several programming languages. Huntpad borrows many features from Syhunt Sandcat's QuickInject sidebar. Like its cousin, it is focused on File Inclusion, XSS and SQL Injection and comes with the following options:
     • Syntax Highlighting - supporting HTML, JavaScript, CSS, XML, PHP, Ruby, SQL, Pascal, Perl, Python and VBScript.
     • SQL Injection functions:
       • Filter Evasion - Database-Specific String Escape (CHAR & CHR); conversion of strings to quoted strings; conversion of spaces to comment tags or new lines.
       • Filter Evasion (MySQL-Specific) - String Concatenation, Percent Obfuscation & Integer Representation (e.g. '26' becomes 'ceil(pi()*pi())*(!!!pi()+true)+ceil(@@version)', a technique presented by Johannes Dahse).
       • UNION Statement Maker.
       • Quick insertion of common injections covering DB2, Informix, Ingres, MySQL, MSSQL, Oracle & PostgreSQL.
     • File Inclusion functions - Quick Shell Upload code generator; PHP String Escape (chr).
     • Cross-Site Scripting (XSS) functions - Filter Evasion: JavaScript String Escape (String.fromCharCode), CSS Escape; various handy alert statements for testing for XSS vulnerabilities.
     • Hash Generators - MD5, SHA-1, SHA-2 (224, 256, 384 & 512), GOST, HAVAL (various), MD2, MD4, RIPEMD (128, 160, 256 & 320), Salsa10, Salsa20, Snefru (128 & 256), Tiger (various) & WHIRLPOOL.
     • Encoders/Decoders:
       • URL Encoder/Decoder.
       • Hex Encoder/Decoder - converts a string or integer to hexadecimal or vice-versa (multiple output formats supported).
       • Base64 Encoder/Decoder.
       • CharCode Converter - converts a string to charcodes (e.g. 'abc' becomes '97,98,99') or vice-versa.
       • IP Obfuscator - converts an IP to dword, hex or octal.
       • JavaScript Encoders - such as JJEncode by Yosuke HASEGAWA.
     • HTML functions - HTML Escape/Unescape; HTML Entity Encoder/Decoder (decimal and hexadecimal); JavaScript and CSS beautifiers; JavaScript String Escape.
     • Text Manipulation functions - Uppercase, Lowercase, Swap Case, Title Case, Reverse, Shuffle, Strip Slashes, Strip Spaces, Add Slashes, Char Separator.
     • Time-Based Blind Injection code - covering MySQL, MSSQL, Oracle, PostgreSQL, Server-Side JavaScript & MongoDB.
     • CRC Calculators - CRC16, CRC32, CRC32b, and more.
     • Classical Ciphers - ROT13 & ROT[N].
     • Checksum Calculators - Adler-32 & Fletcher.
     • Buffer Overflow String Creator.
     • Random String & Number Generation functions.
     • URL Splitter.
     • Useful Strings - math, character sets and more.

     Download: http://www.syhunt.com/en/index.php?n=Products.SyhuntHuntpad
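Two of the converters listed above are simple enough to sketch in a few lines of Python (my own illustration of what these features compute, not Huntpad's code):

```python
def to_charcodes(s):
    """CharCode Converter: 'abc' -> '97,98,99'."""
    return ",".join(str(ord(c)) for c in s)

def ip_to_dword(ip):
    """IP Obfuscator, dword form: collapse a dotted-quad IP
    into a single 32-bit integer."""
    value = 0
    for octet in ip.split("."):
        value = (value << 8) | int(octet)
    return value

print(to_charcodes("abc"))       # 97,98,99
print(ip_to_dword("127.0.0.1"))  # 2130706433
```

Browsers accept the dword form directly, e.g. http://2130706433/ resolves to 127.0.0.1, which is why it is useful for filter evasion.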
  6. I'm tired of saying, "Be careful, it's speculative." Then, "Be careful, it's gambling." Then, "Be careful, it's a bubble." Okay, I'll say it: Bitcoin is a scam.

     In my opinion, it's a colossal pump-and-dump scheme, the likes of which the world has never seen. In a pump-and-dump game, promoters "pump" up the price of a security, creating a speculative frenzy, then "dump" some of their holdings at artificially high prices. And some cryptocurrencies are pure frauds. Ernst & Young estimates that 10 percent of the money raised for initial coin offerings has been stolen. The losers are ill-informed buyers caught up in the spiral of greed. The result is a massive transfer of wealth from ordinary families to internet promoters. And "massive" is a massive understatement - 1,500 different cryptocurrencies now register over $300 billion of "value."

     It helps to understand that a bitcoin has no value at all. Promoters claim cryptocurrency is valuable as (1) a means of payment, (2) a store of value and/or (3) a thing in itself. None of these claims are true.

     1. Means of Payment. Bitcoins are accepted almost nowhere, and some cryptocurrencies nowhere at all. Even where accepted, a currency whose value can swing 10 percent or more in a single day is useless as a means of payment.

     2. Store of Value. Extreme price volatility also makes bitcoin undesirable as a store of value. And the storehouses - the cryptocurrency trading exchanges - are far less reliable and trustworthy than ordinary banks and brokers.

     3. Thing in Itself. A bitcoin has no intrinsic value. It only has value if people think other people will buy it for a higher price - the Greater Fool theory.

     Some cryptocurrencies, like Sweatcoin, which is redeemable for workout gear, are the equivalent of online coupons or frequent flier points - a purpose better served by simple promo codes than complex encryption. Indeed, for the vast majority of uses, bitcoin has no role. Dollars, pounds, euros, yen and renminbi are better means of payment, stores of value and things in themselves.

     Cryptocurrency is best-suited for one use: criminal activity. Because transactions can be anonymous - law enforcement cannot easily trace who buys and sells - its use is dominated by illegal endeavors. Most heavy users of bitcoin are criminals, as with Silk Road and WannaCry ransomware. Too many bitcoin exchanges have experienced spectacular heists, such as NiceHash and Coincheck, or outright fraud, such as Mt. Gox and Bitfunder. Way too many Initial Coin Offerings are scams - 418 of the 902 ICOs in 2017 have already failed. Hackers are getting into the act. It's estimated that 90 percent of all remote hacking is now focused on bitcoin theft by commandeering other people's computers to mine coins.

     Even ordinary buyers are flouting the law. Tax law requires that every sale of cryptocurrency be recorded as a capital gain or loss and, of course, most bitcoin sellers fail to do so. The IRS recently ordered one major exchange to produce records of every significant transaction.

     And yet, a prominent Silicon Valley promoter of bitcoin proclaims that "Bitcoin is going to transform society ... Bitcoin's been very resilient. It stayed alive during a very difficult time when there was the Silk Road mess, when Mt. Gox stole all that Bitcoin ..." He argues the criminal activity shows that bitcoin is strong. I'd say it shows that bitcoin is used for criminal activity.

     Bitcoin transactions are sometimes promoted as instant and nearly free, but they're often relatively slow and expensive. It takes about an hour for a bitcoin transaction to be confirmed, and the bitcoin system is limited to five transactions per second. MasterCard can process 38,000 per second. Transferring $100 from one person to another costs about $6 using a cryptocurrency exchange, and well less than $1 using an electronic check.

     Bitcoin is absurdly wasteful of natural resources. Because it is so compute-intensive, it takes as much electricity to create a single bitcoin - a process called "mining" - as it does to power an average American household for two years. If bitcoin were used for a large portion of the world's commerce (which won't happen), it would consume a very large portion of the world's electricity, diverting scarce power from useful purposes.

     In what rational universe could someone simply issue electronic scrip - or just announce that they intend to - and create, out of the blue, billions of dollars of value? It makes no sense. All of this would be a comic sideshow if innocent people weren't at risk. But ordinary people are investing some of their life savings in cryptocurrency. One stock brokerage is encouraging its customers to purchase bitcoin for their retirement accounts!

     It's the job of the SEC and other regulators to protect ordinary investors from misleading and fraudulent schemes. It's time we gave them the legislative authority to do their job.

     Source: https://www.recode.net/2018/4/24/17275202/bitcoin-scam-cryptocurrency-mining-pump-dump-fraud-ico-value#
  7. As before, the CrackMe is dedicated to malware analysts and to those who want to practice becoming them. That's why it is not just a set of abstract riddles, but an exercise that walks through selected tricks that were used in real malware. (Expect some original schemes designed just for this game, too.) Of course, everything is demonstrated on harmless examples, but we still recommend you use a VM for reversing it so that it will not interfere with any antivirus protection.

     Rules of the contest - there are two CrackMe contests:
     • Capture the flag. The first three submitted flags win. The flag should be submitted along with (minimalistic) notes about the steps taken to find it. (No detailed write-up is required.)
     • Best write-up. The write-up will be judged by its educational value, clarity, and accuracy. The author should show his/her method of solving the CrackMe, as well as their level of understanding of the techniques used. The write-up submission contest closes three weeks after capture the flag.

     Submissions to both contests should be sent to my Twitter account: @hasherezade. Each of the four winners will get a prize: a book of his/her choice and some Malwarebytes swag. At the end of the contest, I will publish my own solution, written from the author's point of view. All the submitted write-ups will be linked.

     Asking questions: I want the contest to be fair to everyone, so I will not be answering any questions in private. However, if you are stuck, please don't hesitate to post your question in the comments section of this post, and I will answer as soon as possible. The questions can also be answered by other participants. Giving false clues or teasing beginners will result in a ban - please respect fair play.

     The application is a Windows executable. It was tested on Windows 7 and above. You can download it here. Have fun!

     Source: https://blog.malwarebytes.com/security-world/2018/04/malwarebytes-crackme-2-another-challenge/
  8. My personal challenge for 2016 was to build a simple AI to run my home -- like Jarvis in Iron Man. My goal was to learn about the state of artificial intelligence -- where we're further along than people realize and where we're still a long ways off. These challenges always lead me to learn more than I expected, and this one also gave me a better sense of all the internal technology Facebook engineers get to use, as well as a thorough overview of home automation. So far this year, I've built a simple AI that I can talk to on my phone and computer, that can control my home, including lights, temperature, appliances, music and security, that learns my tastes and patterns, that can learn new words and concepts, and that can even entertain Max. It uses several artificial intelligence techniques, including natural language processing, speech recognition, face recognition, and reinforcement learning, written in Python, PHP and Objective C. In this note, I'll explain what I built and what I learned along the way. Diagram of the systems connected to build Jarvis. Getting Started: Connecting the Home In some ways, this challenge was easier than I expected. In fact, my running challenge (I also set out to run 365 miles in 2016) took more total time. But one aspect that was much more complicated than I expected was simply connecting and communicating with all of the different systems in my home. Before I could build any AI, I first needed to write code to connect these systems, which all speak different languages and protocols. We use a Crestron system with our lights, thermostat and doors, a Sonos system with Spotify for music, a Samsung TV, a Nest cam for Max, and of course my work is connected to Facebook's systems. I had to reverse engineer APIs for some of these to even get to the point where I could issue a command from my computer to turn the lights on or get a song to play. Further, most appliances aren't even connected to the internet yet. 
It's possible to control some of these using internet-connected power switches that let you turn the power on and off remotely. But often that isn't enough. For example, one thing I learned is it's hard to find a toaster that will let you push the bread down while it's powered off so you can automatically start toasting when the power goes on. I ended up finding an old toaster from the 1950s and rigging it up with a connected switch. Similarly, I found that connecting a food dispenser for Beast or a grey t-shirt cannon would require hardware modifications to work. For assistants like Jarvis to be able to control everything in homes for more people, we need more devices to be connected and the industry needs to develop common APIs and standards for the devices to talk to each other. An example natural language request from command line. Natural Language Once I wrote the code so my computer could control my home, the next step was making it so I could talk to my computer and home the way I'd talk to anyone else. This was a two step process: first I made it so I could communicate using text messages, and later I added the ability to speak and have it translate my speech into text for it to read. It started simple by looking for keywords, like "bedroom", "lights", and "on" to determine I was telling it to turn the lights on in the bedroom. It quickly became clear that it needed to learn synonyms, like that "family room" and "living room" mean the same thing in our home. This meant building a way to teach it new words and concepts. Understanding context is important for any AI. For example, when I tell it to turn the AC up in "my office", that means something completely different from when Priscilla tells it the exact same thing. That one caused some issues! 
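The keyword-and-synonym matching described here can be sketched in a few lines of Python. Everything below (the room list, the synonym table, and the `parse_command` helper) is my own hypothetical illustration of the approach, not Jarvis code:

```python
# Canonicalize aliases first, then scan for known rooms and actions.
SYNONYMS = {"living room": "family room"}        # "family room" == "living room"
ROOMS = {"bedroom", "family room", "office"}
ACTIONS = {"on", "off"}

def parse_command(text):
    """Return ('lights', room, action) for a recognized lights command,
    else None. A real system would also need per-speaker context."""
    text = text.lower()
    for alias, canonical in SYNONYMS.items():
        text = text.replace(alias, canonical)
    room = next((r for r in ROOMS if r in text), None)
    action = next((a for a in ACTIONS if a in text.split()), None)
    if "lights" in text and room and action:
        return ("lights", room, action)
    return None

print(parse_command("turn the lights on in the living room"))
# ('lights', 'family room', 'on')
```

Teaching a new synonym is then just one more entry in `SYNONYMS`, which mirrors the "way to teach it new words and concepts" described above.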
Or, for example, when you ask it to make the lights dimmer or to play a song without specifying a room, it needs to know where you are or it might end up blasting music in Max's room when we really need her to take a nap. Whoops. Music is a more interesting and complex domain for natural language because there are too many artists, songs and albums for a keyword system to handle. The range of things you can ask it is also much greater. Lights can only be turned up or down, but when you say "play X", even subtle variations can mean many different things. Consider these requests related to Adele: "play someone like you", "play someone like adele", and "play some adele". Those sound similar, but each is a completely different category of request. The first plays a specific song, the second recommends an artist, and the third creates a playlist of Adele's best songs. Through a system of positive and negative feedback, an AI can learn these differences. The more context an AI has, the better it can handle open-ended requests. At this point, I mostly just ask Jarvis to "play me some music" and by looking at my past listening patterns, it mostly nails something I'd want to hear. If it gets the mood wrong, I can just tell it, for example, "that's not light, play something light", and it can both learn the classification for that song and adjust immediately. It also knows whether I'm talking to it or Priscilla is, so it can make recommendations based on what we each listen to. In general, I've found we use these more open-ended requests more frequently than more specific asks. No commercial products I know of do this today, and this seems like a big opportunity. Jarvis uses face recognition to let my friends in automatically and let me know. Vision and Face Recognition About one-third of the human brain is dedicated to vision, and there are many important AI problems related to understanding what is happening in images and videos. 
These problems include tracking (e.g. is Max awake and moving around in her crib?), object recognition (e.g. is that Beast or a rug in that room?), and face recognition (e.g. who is at the door?). Face recognition is a particularly difficult version of object recognition because most people look relatively similar compared to telling apart two random objects -- for example, a sandwich and a house. But Facebook has gotten very good at face recognition for identifying when your friends are in your photos. That expertise is also useful when your friends are at your door and your AI needs to determine whether to let them in. To do this, I installed a few cameras at my door that can capture images from all angles. AI systems today cannot identify people from the back of their heads, so having a few angles ensures we see the person's face. I built a simple server that continuously watches the cameras and runs a two step process: first, it runs face detection to see if any person has come into view, and second, if it finds a face, then it runs face recognition to identify who the person is. Once it identifies the person, it checks a list to confirm I'm expecting that person, and if I am then it will let them in and tell me they're here. This type of visual AI system is useful for a number of things, including knowing when Max is awake so it can start playing music or a Mandarin lesson, or solving the context problem of knowing which room in the house we're in so the AI can correctly respond to context-free requests like "turn the lights on" without providing a location. Like most aspects of this AI, vision is most useful when it informs a broader model of the world, connected with other abilities like knowing who your friends are and how to open the door when they're here. The more context the system has, the smarter it gets overall. I can text Jarvis from anywhere using a Messenger bot. 
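The two-step door-camera process described above (detect a face, then recognize it, then check the expected-guest list) can be sketched structurally. The `detect_faces` and `recognize_face` functions below are hypothetical stubs standing in for real detection and recognition models:

```python
EXPECTED_GUESTS = {"alice"}  # the list Jarvis checks before opening the door

def detect_faces(frame):
    # Stub: a real system would run a face *detector* on the camera frame.
    return frame.get("faces", [])

def recognize_face(face):
    # Stub: a real system would run a face *recognizer* on the detected face.
    return face.get("name")

def handle_frame(frame):
    for face in detect_faces(frame):       # step 1: detection
        person = recognize_face(face)      # step 2: recognition
        if person in EXPECTED_GUESTS:      # step 3: expected-guest check
            return "open door for " + person
        elif person:
            return "notify owner about " + person
    return "no action"

print(handle_frame({"faces": [{"name": "alice"}]}))  # open door for alice
```

Running detection before recognition keeps the expensive recognition model off frames that contain no person at all, which is why the server splits the work into two steps.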
Messenger Bot I programmed Jarvis on my computer, but in order to be useful I wanted to be able to communicate with it from anywhere I happened to be. That meant the communication had to happen through my phone, not a device placed in my home. I started off building a Messenger bot to communicate with Jarvis because it was so much easier than building a separate app. Messenger has a simple framework for building bots, and it automatically handles many things for you -- working across both iOS and Android, supporting text, image and audio content, reliably delivering push notifications, managing identity and permissions for different people, and more. You can learn about the bot framework at messenger.com/platform. I can text anything to my Jarvis bot, and it will instantly be relayed to my Jarvis server and processed. I can also send audio clips and the server can translate them into text and then execute those commands. In the middle of the day, if someone arrives at my home, Jarvis can text me an image and tell me who's there, or it can text me when I need to go do something. One thing that surprised me about my communication with Jarvis is that when I have the choice of either speaking or texting, I text much more than I would have expected. This is for a number of reasons, but mostly it feels less disturbing to people around me. If I'm doing something that relates to them, like playing music for all of us, then speaking feels fine, but most of the time text feels more appropriate. Similarly, when Jarvis communicates with me, I'd much rather receive that over text message than voice. That's because voice can be disruptive and text gives you more control of when you want to look at it. Even when I speak to Jarvis, if I'm using my phone, I often prefer it to text or display its response. 
This preference for text communication over voice communication fits a pattern we're seeing with Messenger and WhatsApp overall, where the volume of text messaging around the world is growing much faster than the volume of voice communication. This suggests that future AI products cannot be solely focused on voice and will need a private messaging interface as well. Once you're enabling private messaging, it's much better to use a platform like Messenger than to build a new app from scratch. I have always been optimistic about AI bots, but my experience with Jarvis has made me even more optimistic that we'll all communicate with bots like Jarvis in the future. Jarvis uses speech recognition in my iOS app to listen to my request for a fresh t-shirt. Voice and Speech Recognition Even though I think text will be more important for communicating with AIs than people realize, I still think voice will play a very important role too. The most useful aspect of voice is that it's very fast. You don't need to take out your phone, open an app, and start typing -- you just speak. To enable voice for Jarvis, I needed to build a dedicated Jarvis app that could listen continuously to what I say. The Messenger bot is great for many things, but the friction for using speech is way too much. My dedicated Jarvis app lets me put my phone on a desk and just have it listen. I could also put a number of phones with the Jarvis app around my home so I could talk to Jarvis in any room. That seems similar to Amazon's vision with Echo, but in my experience, it's surprising how frequently I want to communicate with Jarvis when I'm not home, so having the phone be the primary interface rather than a home device seems critical. I built the first version of the Jarvis app for iOS and I plan to build an Android version soon too. 
I hadn't built an iOS app since 2012 and one of my main observations is that the toolchain we've built at Facebook since then for developing these apps and for doing speech recognition is very impressive. Speech recognition systems have improved recently, but no AI system is good enough to understand conversational speech just yet. Speech recognition relies on both listening to what you say and predicting what you will say next, so structured speech is still much easier to understand than unstructured conversation. Another interesting limitation of speech recognition systems -- and machine learning systems more generally -- is that they are more optimized for specific problems than most people realize. For example, understanding a person talking to a computer is a subtly different problem from understanding a person talking to another person. If you train a machine learning system on data from Google of people speaking to a search engine, it will perform relatively worse on Facebook at understanding people talking to real people. In the case of Jarvis, training an AI that you'll talk to at close range is also different from training a system you'll talk to from all the way across the room, like Echo. These systems are more specialized than it appears, and that implies we are further off from having general systems than it might seem. On a psychological level, once you can speak to a system, you attribute more emotional depth to it than a computer you might interact with using text or a graphic interface. One interesting observation is that ever since I built voice into Jarvis, I've also wanted to build in more humor. Part of this is that now it can interact with Max and I want those interactions to be entertaining for her, but part of it is that it now feels like it's present with us. I've taught it fun little games like Priscilla or I can ask it who we should tickle and it will randomly tell our family to all go tickle one of us, Max or Beast. 
I've also had fun adding classic lines like "I'm sorry, Priscilla. I'm afraid I can't do that." There's a lot more to explore with voice. The AI technology is just getting good enough for this to be the basis of a great product, and it will get much better in the next few years. At the same time, I think the best products like this will be ones you can bring with you anywhere and communicate with privately as well. Facebook Engineering Environment As the CEO of Facebook, I don't get much time to write code in our internal environment. I've never stopped coding, but these days I mostly build personal projects like Jarvis. I expected I'd learn a lot about the state of AI this year, but I didn't realize I would also learn so much about what it's like to be an engineer at Facebook. And it's impressive. My experience of ramping up in the Facebook codebase is probably pretty similar to what most new engineers here go through. I was consistently impressed by how well organized our code is, and how easy it was to find what you're looking for -- whether it's related to face recognition, speech recognition, the Messenger Bot Framework [messenger.com/platform] or iOS development. The open source Nuclide [github.com/facebook/nuclide] packages we've built to work with GitHub's Atom make development much easier. The Buck [buckbuild.com] build system we've developed to build large projects quickly also saved me a lot of time. Our open source FastText [github.com/facebookresearch/fastText] AI text classification tool is also a good one to check out, and if you're interested in AI development, the whole Facebook Research [github.com/facebookresearch] GitHub repo is worth taking a look at. One of our values is "move fast". That means you should be able to come here and build an app faster than you can anywhere else, including on your own. You should be able to come here and use our infra and AI tools to build things it would take you a long time to build on your own. 
Building internal tools that make engineering more efficient is important to any technology company, but this is something we take especially seriously. So I want to give a shout out to everyone on our infra and tools teams that make this so good. Next Steps Although this challenge is ending, I'm sure I'll continue improving Jarvis since I use it every day and I'm always finding new things I want to add. In the near term, the clearest next steps are building an Android app, setting up Jarvis voice terminals in more rooms around my home, and connecting more appliances. I'd love to have Jarvis control my Big Green Egg and help me cook, but that will take even more serious hacking than rigging up the t-shirt cannon. In the longer term, I'd like to explore teaching Jarvis how to learn new skills itself rather than me having to teach it how to perform specific tasks. If I spent another year on this challenge, I'd focus more on learning how learning works. Finally, over time it would be interesting to find ways to make this available to the world. I considered open sourcing my code, but it's currently too tightly tied to my own home, appliances and network configuration. If I ever build a layer that abstracts more home automation functionality, I may release that. Or, of course, that could be a great foundation to build a new product. Conclusions Building Jarvis was an interesting intellectual challenge, and it gave me direct experience building AI tools in areas that are important for our future. I've previously predicted that within 5-10 years we'll have AI systems that are more accurate than people for each of our senses -- vision, hearing, touch, etc, as well as things like language. It's impressive how powerful the state of the art for these tools is becoming, and this year makes me more confident in my prediction. At the same time, we are still far off from understanding how learning works. 
Everything I did this year -- natural language, face recognition, speech recognition and so on -- is a variant of the same fundamental pattern recognition techniques. We know how to show a computer many examples of something so it can recognize it accurately, but we still do not know how to take an idea from one domain and apply it to something completely different.

To put that in perspective, I spent about 100 hours building Jarvis this year, and now I have a pretty good system that understands me and can do lots of things. But even if I spent 1,000 more hours, I probably wouldn't be able to build a system that could learn completely new skills on its own -- unless I made some fundamental breakthrough in the state of AI along the way.

In a way, AI is both closer and farther off than we imagine. AI is closer to being able to do more powerful things than most people expect -- driving cars, curing diseases, discovering planets, understanding media. Those will each have a great impact on the world, but we're still figuring out what real intelligence is.

Overall, this was a great challenge. These challenges have a way of teaching me more than I expected at the beginning. This year I thought I'd learn about AI, and I also learned about home automation and Facebook's internal technology too. That's what's so interesting about these challenges.

Thanks for following along with this challenge and I'm looking forward to sharing next year's challenge in a few weeks.

Sursa: https://www.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/
  9. A team of academics has successfully developed and tested malware that can exfiltrate data from air-gapped computers via power lines. The team —from the Ben-Gurion University of the Negev in Israel— named their data exfiltration technique PowerHammer.

PowerHammer works by infecting an air-gapped computer with malware that intentionally alters CPU utilization levels to make the victim's computer consume more or less electrical power. By default, computers draw power from the local electrical network in a uniform manner. A PowerHammer attack produces variations in the amount of power a victim's PC draws from the local electrical network. This phenomenon is known as a "conducted emission." By alternating between high and low power consumption levels, the PowerHammer malware can encode binary data from a victim's computer into the power consumption pattern.

There are two types of PowerHammer attacks

To retrieve this data, an attacker must tap the victim's electrical network so he can read the power consumption variations and decode the binary data hidden inside. Based on where the attacker places his tapping rig, two types of PowerHammer attacks exist, with two different exfiltration speeds.

The first is "line level power-hammering," which occurs when the attacker manages to tap the power cable between the air-gapped computer and the electrical socket. The exfiltration speed for line level power-hammering is around 1,000 bits/second.

The second is "phase level power-hammering." This version of the attack occurs when the intruder taps the power lines at the phase level, in a building's electrical panel. This version of the attack is stealthier but can recover data at only 10 bits/second, mainly due to the greater amount of "noise" at the power line phase level.

Attack uses off-the-shelf electrical equipment

The tapping device isn't anything super-advanced, being a mundane split-core current transformer that can be attached to any electrical line. 
This is a non-invasive probe which is clamped around the power line and measures the amount of current passing through it (Fig. 10). The non-invasive probe behaves like an inductor which responds to the magnetic field around a current-carrying cable (Fig. 10 b). The amount of current in the coil is correlated with the amount of current flowing in the conductor. For our experiments we used SparkFun's split core current transformer ECS1030-L72.

The tapping device (probe) is also capable of sending the recorded data to a nearby computer via WiFi, making data collection easier from afar, without the attacker having to physically connect to the tapping probe.

Attack works on desktops, servers, IoT devices

Experiments revealed the attack can successfully steal data from air-gapped desktops, laptops, servers, and even IoT devices, though the exfiltration speed is slower for the latter. Another observation is that exfiltration speed improves the more cores a CPU possesses.

Mitigations and more details for technically inclined users are available in the research team's paper, entitled "PowerHammer: Exfiltrating Data from Air-Gapped Computers through Power Lines." It must also be said that this malware is only an experiment: if ever deployed in the wild, such a tool would only be found in the arsenal of intelligence agencies, not something normal users would see every day. 
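The core idea — encoding bits by modulating CPU load — can be illustrated with a short sketch. This is a hypothetical toy version, not the researchers' code (the paper describes a more robust modulation scheme): it simply busy-loops for a "1" bit and sleeps for a "0" bit during each bit period, which shapes the machine's power draw into a high/low pattern a line tap could observe.

```python
import time

def to_bits(data: bytes):
    """Convert bytes to a flat list of bits, most significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def transmit(data: bytes, bit_period: float = 0.5) -> None:
    """Toy transmitter: high CPU load for a 1 bit, idle for a 0 bit.

    A probe clamped on the power line would see the resulting
    high/low power-draw pattern and could decode the bits from it.
    """
    for bit in to_bits(data):
        deadline = time.monotonic() + bit_period
        if bit:
            while time.monotonic() < deadline:
                pass                    # busy-loop -> high CPU load -> more power drawn
        else:
            time.sleep(bit_period)      # idle -> low CPU load -> less power drawn

# transmit(b"hi", bit_period=0.1)  # modulates CPU load for 16 bit periods
```

Real attacks would also need framing, error correction, and per-core load control to stand out against the noise the paper discusses; this sketch only shows the modulation principle.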
The research center at the Ben-Gurion University of the Negev that came up with this new data exfiltration technique has a long history of innovative —and sometimes weird— hacks, all listed below:

LED-it-Go - exfiltrate data from air-gapped systems via an HDD's activity LED
SPEAKE(a)R - use headphones to record audio and spy on nearby users
9-1-1 DDoS - launch DDoS attacks that can cripple a US state's 911 emergency systems
USBee - make a USB connector's data bus give out electromagnetic emissions that can be used to exfiltrate data
AirHopper - use the local GPU card to emit electromagnetic signals to a nearby mobile phone, also used to steal data
Fansmitter - steal data from air-gapped PCs using sounds emanated by a computer's GPU fan
DiskFiltration - use controlled read/write HDD operations to steal data via sound waves
BitWhisper - exfiltrate data from non-networked computers using heat emanations
Unnamed attack - uses flatbed scanners to relay commands to malware-infested PCs or to exfiltrate data from compromised systems
xLED - use router or switch LEDs to exfiltrate data
Shattered Trust - using backdoored replacement parts to take over smartphones
aIR-Jumper - use security camera infrared capabilities to steal data from air-gapped networks
HVACKer - use HVAC systems to control malware on air-gapped systems
MAGNETO & ODINI - steal data from Faraday cage-protected systems
MOSQUITO - steal data from PCs using speakers and headphones

Sursa: https://www.bleepingcomputer.com/news/security/researchers-create-malware-that-steals-data-via-power-lines/
  10. A study funded by DARPA has increased the possibility of memory-enhancing brain prosthetics. Earlier animal research showed successful results, after which the study was conducted on patients at Wake Forest Baptist Medical Center. The patients there already had brain implants as part of their epilepsy treatment. They experienced major improvements in both short-term and long-term memory.

The patients were asked to play a memory-related computer game in which they had to remember specific things. While the patients were trying to remember those things, the researchers recorded various patterns of neural firing in the brain's hippocampus, the area of the brain responsible for memory. They also paid attention to the neural patterns that resulted in a correct memory being encoded. After that, they made the patients play the game again and electrically stimulated each patient's brain using the encoding patterns studied earlier. They were hoping to use that electrical stimulation to trigger more effective storage of the same data. The method worked and showed results better than the team was expecting: scores on the short-term memory tests jumped by 37% and scores on the long-term memory tests improved by 35%.

Robert Hampson, the lead author of the study, said, "We showed that we could tap into a patient's own memory content, reinforce it and feed it back to the patient. Even when a person's memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient's brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it."

The research has opened the door to memory-enhancing brain implants. 
These implants might offer a button that can be pressed when looking at something, to increase the chances of remembering it later. The researchers see this as a potential medical device to help patients with Alzheimer's, stroke, or traumatic brain injury. The implant would help them restart the process of forming new memories using their brain's own activity patterns. The team also hopes the technology might be able to assist people in keeping memories they have already encoded. Hampson says, "In the future, we hope to be able to help people hold onto specific memories, such as where they live or what their grandkids look like, when their overall memory begins to fail." Sursa: http://wonderfulengineering.com/brain-prosthetic-boost-memory-shown-impressive-results-human-trials/
  11. Usr6

    Fun stuff

    Photo source and links to the Twitter discussion: https://www.reddit.com/r/sysadmin/comments/8aem4n/tmobile_plaintext_password_data_breach_thought_to/
  12. Over 80 recipes that will take your PHP 7 web development skills to the next level!

This is the most up-to-date book on PHP in the market
It covers the new features of version 7.x, best practices for server-side programming, and MVC frameworks
The recipe-based approach will allow you to explore the unique capabilities that PHP offers to web programmers

Link: https://www.packtpub.com/packt/offers/free-learning
  13. Canon has just released a new 3-minute video showing the power of its 120-megapixel CMOS sensor, which it first announced in September 2015 and then showed off at an expo in May 2016.

The sensor is called the 120MXS, and it has an ultra-high resolution of 13280×9184, or about 60 times the resolution of Full HD video. Physically, it is an APS-H sensor (29.22×20.20mm), which falls between full frame (36×24mm) and APS-C crop (22.5×15mm).

"Ultra-high-resolution is made possible by parallel signal processing, which reads signals at high speed from multiple pixels," Canon says. "All pixel progressive reading of 9.4fps is made possible by 28 digital signal output channels. It is available in RGB or, with twice the sensitivity, in monochrome."

When shooting video of the inner workings of a watch, the sensor is able to capture significantly more detail than a 1080p camera. You can shoot some ordinary footage with an ordinary lens and then reveal an extraordinary amount of detail simply by digitally zooming into the frame.

A video still frame captured with the 120MP sensor.
A tiny crop of the still frame captured by the 120MP sensor.

For another demonstration of this sensor's abilities, Canon took it to a rugby match and pitted it against standard Full HD footage captured using the Canon 1D Mark IV, pointing both cameras at a wide view of the fans in the stands. Here's the difference in detail between the two cameras (Full HD vs the 120MP sensor).

"Through the further development of CMOS image sensors, Canon seeks to continue breaking new ground in the world of imaging," Canon says.

There's still no word on if or when we'll see this 120MP sensor in a camera available to consumers. If you'd like one in the future, though, you should probably start stockpiling hard drives: RAW photos shot by the sensor weigh in at 210MB each.

(via CanonUSA via CanonWatch)

Sursa: https://petapixel.com/2018/03/29/this-is-the-power-of-canons-120mp-camera-sensor/
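The "about 60 times Full HD" figure is easy to sanity-check with a bit of arithmetic. The dimensions below are the ones quoted in the article; the script is just a quick check, not anything from Canon:

```python
# Resolution figures quoted in the article
sensor_w, sensor_h = 13280, 9184     # Canon 120MXS
fullhd_w, fullhd_h = 1920, 1080      # Full HD video frame

sensor_px = sensor_w * sensor_h      # total pixel count of the sensor
fullhd_px = fullhd_w * fullhd_h      # total pixel count of a Full HD frame

print(sensor_px / 1e6)               # roughly 122 megapixels
print(sensor_px / fullhd_px)         # roughly 59x, i.e. "about 60 times" Full HD
```

So the sensor is closer to 122MP than a round 120MP, and the Full HD multiple is about 59 — both consistent with the article's rounded figures.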
  14. Sublime has highly customizable build systems that can add to your productivity if you learn how to use them to your advantage. You can define one for your project and, whenever you are editing any file, run certain commands on the source file and see the output in the Sublime console, without leaving the editor.

I mostly use IntelliJ for development but still find myself switching to Sublime Text from time to time, depending upon the nature of the project. I mainly use Sublime when I have to write some small script or a library, and when I do I prefer to set up the build system to make testing easier. In this post I am going to explain how to create one by building an example build system for a hello-world PHP application. The steps should be the same for any language. So let's get started.

The first thing you are going to do is create a new build system. You can do that by going to:

Tools > Build System > New Build System

This will open a new file named untitled.sublime-build. Update the file and put the below content in it:

{ "cmd": ["php", "$file"], "selector": "source.php", "file_regex": "php$" }

Now save this file with the name php.sublime-build. To give you some details about the file content:

cmd is the command that we need to run, with the arguments that we want to pass to it
selector is an optional string used to locate the best builder to use for the current file scope. This is only relevant if Tools > Build System > Automatic is true
file_regex specifies the file pattern that our build system is going to work with

After saving the file you can see the build system inside Tools > Build System, and you can run any file ending with php, as specified in the snippet above. Now let's test our build system. Create a new php file and put the below content in it:

<?php
echo "Hello world";

Now let's run the file that we have created. 
So select php from Tools > Build System and hit CMD + B if you are on Mac, or CTRL + B if you are on Windows or Linux. Once you run it, you will notice the output of the build in the console. In case you want to cancel a stuck build, you can do that by pressing CTRL + C if you are on Mac, or CTRL + Break if you are on Windows or Linux.

You can use the same steps to create a build system for any language. For example, here is how the contents of the build file might look for a JavaScript application:

{ "cmd": ["node", "$file"], "selector": "source.js", "file_regex": "js$" }

Hope you enjoyed the article. You can learn more about build systems in the Sublime docs. If you have any comments or feedback, leave them down below and feel free to connect with me on Twitter or say hi via email.

Sursa: https://medium.com/tech-tajawal/build-systems-in-sublime-text-3-9706ab7f44f4
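Following the same pattern, a sketch of a build file for Python scripts might look like the snippet below. This is an assumed example, not from the article: it presumes a `python3` binary on your PATH and mirrors the keys used in the php and node examples above.

```json
{
    "cmd": ["python3", "$file"],
    "selector": "source.python",
    "file_regex": "py$"
}
```

Save it as python.sublime-build and it will appear under Tools > Build System just like the others.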