Showing results for tags 'nginx'.

Found 4 results

  1. I think the idea of using nginx with SSL as a frontend for Apache is a very good one, for the following reasons:
     - It behaves like a TCP offloader, giving the web server some extra protection (in case of HTTP(S) attacks).
     - It lowers the site's access time (compared to plain Apache with SSL).

     vhost config:

     server {
         listen 188.240.88.4:443;
         server_name rstcenter.com www.rstcenter.com;
         keepalive_timeout 60;

         ssl on;
         ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
         ssl_ciphers aRSA:!eNULL:!EXP:!LOW:-RC4:-3DES:!SEED:!MD5:!kPSK:!kSRP:-kRSA:@STRENGTH:AES128-SHA:DES-CBC3-SHA:RC4-SHA;
         ssl_prefer_server_ciphers on;
         ssl_session_cache shared:TLSSL:30m;
         ssl_session_timeout 10m;
         ssl_certificate /etc/nginx/ssl/rstcenter.com.combined.crt;
         ssl_certificate_key /etc/nginx/ssl/rstcenter.com.key;

         more_set_headers "X-Secure-Connection: true";
         add_header Strict-Transport-Security max-age=3456000;

         location / {
             proxy_pass http://127.0.0.1:1234;
             proxy_set_header Host $http_host;
             proxy_set_header X-Real-IP $remote_addr;
         }
     }

     A full nginx.conf can be seen here (it is not the default one):

     user www-data;
     worker_processes 4;
     worker_priority -1;
     pid /var/run/nginx.pid;
     worker_rlimit_nofile 640000;
     worker_cpu_affinity 0001 0010 0100 1000;

     events {
         worker_connections 64000;
     }

     http {
         sendfile on;
         tcp_nopush on;
         tcp_nodelay on;
         keepalive_timeout 20;
         keepalive_requests 10000;
         types_hash_max_size 2048;
         client_max_body_size 128M;
         client_body_buffer_size 128k;
         connection_pool_size 8192;
         request_pool_size 8k;
         server_names_hash_bucket_size 2048;
         server_tokens off;
         resolver 127.0.0.1;
         resolver_timeout 2s;
         reset_timedout_connection on;

         more_set_headers "Server: Apache";
         more_set_headers "X-XSS-Protection: 1; mode=block";
         more_set_headers "X-Frame-Options: sameorigin";
         more_set_headers "X-Content-Type-Options: nosniff";

         open_file_cache max=147000 inactive=30s;
         open_file_cache_valid 60s;
         open_file_cache_min_uses 2;
         open_file_cache_errors on;

         include /etc/nginx/mime.types;
         default_type application/octet-stream;

         access_log /var/log/nginx/access.log;
         error_log /var/log/nginx/error.log;

         gzip on;
         gzip_static on;
         gzip_disable "msie6";
         gzip_vary on;
         gzip_proxied any;
         gzip_comp_level 6;
         gzip_buffers 16 8k;
         gzip_min_length 500;
         gzip_http_version 1.0;
         gzip_types text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript text/plain;

         include /etc/nginx/conf.d/*.conf;
         include /etc/nginx/sites-enabled/*;
     }

     Notes:
     - Nginx is installed on Debian (the package is 'nginx-extras').
     - Apache runs listening on 127.0.0.1 port 1234.
     - The site certificate (the CRT) is the domain's crt concatenated with the intermediate certificate.
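     Regarding the last note, the combined certificate referenced by ssl_certificate is typically produced by simply concatenating the two PEM files. A minimal sketch, assuming the domain and intermediate certificates have the file names shown (the names are illustrative, not from the original post):

     # domain certificate first, then the intermediate CA certificate (file names are examples)
     cat rstcenter.com.crt intermediate-ca.crt > /etc/nginx/ssl/rstcenter.com.combined.crt
     # verify the config before reloading nginx
     nginx -t && /etc/init.d/nginx reload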
  2. There is a special type of DDoS attack, application-level DDoS, which is quite hard to combat. The filtering logic for this type of attack must operate at the HTTP message level, so in most cases it is implemented as custom modules for application-layer (nowadays usually user-space) HTTP accelerators, and Nginx is certainly the most widespread platform for such solutions. However, common HTTP servers and reverse proxies were not designed for DDoS mitigation; they are simply the wrong tools for this job. One of the reasons is that they are too slow to cope with massive traffic (see my recent paper and presentation for other reasons). If logging is switched off and all content is in the cache, then the HTTP parser becomes the hottest spot. Simplified perf output for Nginx under a simple DoS is shown below (Nginx's calls begin with the 'ngx' prefix; memcpy and recv are standard GLIBC calls):

     %      symbol name
     1.5719 ngx_http_parse_header_line
     1.0303 ngx_vslprintf
     0.6401 memcpy
     0.5807 recv
     0.5156 ngx_linux_sendfile_chain
     0.4990 ngx_http_limit_req_handler

     The next hot spots are linked to complicated application logic (ngx_vslprintf) and I/O.

     During Tempesta FW development we studied several HTTP servers and proxies (Nginx, Apache Traffic Server, Cherokee, node.js, Varnish and userver) and learned that all of them use switch- and/or if-else-driven state machines. The problem with this approach is that the HTTP parsing code is comparable in size to the L1i cache and processes one character at a time with a significant number of branches. Modern compilers optimize large switch statements into lookup tables, which minimizes the number of conditional jumps, but branch mispredictions and instruction cache misses still hurt the performance of the state machine. So this method probably has poor performance.

     The other well-known approach is a table-driven automaton (a minimal sketch of its classic inner loop is given below). However, a simple HTTP parser can have more than 200 states and an alphabet cardinality of 72. That gives 200 x 72 = 14400 bytes for the table, which is about half of the L1d cache of modern microprocessors, so this approach can also be considered inefficient due to its high memory consumption.

     The first obvious alternative to the plain state machine is the Hybrid State Machine (HSM) described in our paper, which combines a very small table with an equally small switch statement. In our case we tried to encode the outgoing transitions of a state with at most 4 ranges. If a state has more outgoing transitions, all transitions beyond those 4 must be encoded in the switch; all actions (such as storing HTTP header names and values) must also be performed in the switch. Using this technique we can encode each state in only 16 bytes, i.e. one cache line can hold 4 states, so the approach should significantly improve the data cache hit rate.

     We also know that Ragel generates near-perfect automatons and combines case labels in a switch statement with direct goto labels (it seems the switch is used to be able to enter the FSM from any state, i.e. to be able to process chunked data). Such automatons spend fewer loop cycles and are a bit faster than the traditional one-loop-iteration-per-transition approach. There has been a successful attempt to generate simple HTTP parsers using Ragel, but those parsers are limited in functionality. However, several research papers say that automaton states are just auxiliary information, and that an automaton can be accelerated significantly if the state information is dropped.
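     For reference, this is roughly what the classic table-driven inner loop mentioned above looks like. It is only a minimal sketch using the 200-state / 72-class sizes quoted above; the names and layout are illustrative, not the actual DPI parser code:

     #include <stddef.h>

     #define N_STATES  200   /* "> 200 states" quoted above                  */
     #define N_CLASSES 72    /* alphabet cardinality quoted above            */

     /* Maps every input byte to one of the N_CLASSES character classes.     */
     static unsigned char char_class[256];
     /* next_state[state][class]: 200 * 72 = 14400 bytes, ~half of a typical L1d. */
     static unsigned char next_state[N_STATES][N_CLASSES];

     unsigned int
     tbl_run(const unsigned char *buf, size_t len)
     {
             unsigned int st = 0;
             size_t i;

             /* One table lookup per input character; the table itself encodes
              * the automaton, so the loop body has almost no branches. */
             for (i = 0; i < len; i++)
                     st = next_state[st][char_class[buf[i]]];

             return st;
     }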
     So the second interesting option for generating the fastest HTTP parser is to encode the automaton directly using simple goto statements, even without any explicit loop (a minimal sketch of this style is given at the end of this post). Basically, an HTTP parser just matches a string against a set of characters (e.g. [A-Za-z_-] for header names), which is what strspn(3) does. SSE 4.2 provides the PCMPSTR instruction family for this purpose (GLIBC since 2.16 uses an SSE 4.2 implementation for strspn()). However, these are vector instructions that cannot handle accept or reject sets of more than 16 characters, so they are not very usable for HTTP parsers.

     Results

     I made a simple benchmark of the four approaches described above (http_ngx.c - Nginx HTTP parsing routines, http_table.c - table-driven FSM, http_hsm.c - hybrid state machine and http_goto.c - simple goto-driven FSM). Here are the results (routines with 'opt' or 'lw' are optimized or lightweight versions of the functions):

     Haswell (i7-4650U)

     Nginx HTTP parser:
         ngx_request_line:    730ms
         ngx_header_line:     422ms
         ngx_lw_header_line:  428ms
         ngx_big_header_line: 1725ms

     HTTP Hybrid State Machine:
         hsm_header_line:     553ms

     Table-driven Automaton (DPI):
         tbl_header_line:     473ms
         tbl_big_header_line: 840ms

     Goto-driven Automaton:
         goto_request_line:     470ms
         goto_opt_request_line: 458ms
         goto_header_line:      237ms
         goto_big_header_line:  589ms

     Core (Xeon E5335)

     Nginx HTTP parser:
         ngx_request_line:    909ms
         ngx_header_line:     583ms
         ngx_lw_header_line:  661ms
         ngx_big_header_line: 1938ms

     HTTP Hybrid State Machine:
         hsm_header_line:     433ms

     Table-driven Automaton (DPI):
         tbl_header_line:     562ms
         tbl_big_header_line: 1570ms

     Goto-driven Automaton:
         goto_request_line:     747ms
         goto_opt_request_line: 736ms
         goto_header_line:      375ms
         goto_big_header_line:  975ms

     The goto-driven automaton shows the best performance in all tests on both architectures, and it is also much easier to implement than the HSM. So in Tempesta FW we migrated from the HSM to a goto-driven automaton, with some additional optimizations.

     Lessons Learned

     ** Haswell has a very good BPU **

     On the Core micro-architecture the HSM behaves much better than the switch-driven and table-driven automatons. This is not the case on Haswell, where it loses to both of those approaches. I tried many optimization techniques to improve HSM performance, but the results above are the best I obtained and they are still worse than the simple FSM approaches. The profiler shows that the hot spot of the HSM on Haswell is in the following code:

     if (likely((unsigned char)(c - RNG_CB(s, 0)) <= RNG_SUB(s, 0))) {
             st = RNG_ST(s, 0);
             continue;
     }

     Here we extract the transition information and compare the current character with the range. In most cases only this one branch is observed in the test; the 3rd and 4th branches are never taken. The whole automaton was encoded in only 2 cache lines. In the first test case, when the XTrans.x structure is dereferenced to access the ranges, the compiler generates 3 pointer dereferences. These instructions (part of the disassembled branch)

     sub    0x4010c4(%rax),%bl
     cmp    0x4010c5(%rax),%bl
     movzbl 0x4010cc(%rax),%eax

     produce 3 accesses to L1d, and on Haswell the cache has quite limited bandwidth (64 bytes for reading and 32 bytes for writing per cycle, with a minimal latency of 4 cycles), even though all the instructions access the same cache line. So the bottleneck of this test case is L1d bandwidth.
     If we instead use the XTrans.l longs (we only need l[0], which can be loaded with a single L1d access, in all cases) and use bitwise operations to extract the data, we get fewer L1d accesses (4G vs 6.7G for the previous case), but branch mispredictions increase. The problem is that the more complex expression in the condition makes it harder for the Branch Prediction Unit to predict the branches. At the same time, we can see that simple branches (in the switch-driven and goto-driven automatons) show perfect performance on Haswell. So the advanced Haswell BPU handles simple automatons perfectly, making the complex HSM a poor fit; in fact the HSM is the only test that runs slower on Haswell than on the Core Xeon. Probably this is the difference between server and mobile chips: even an old server processor beats a modern mobile CPU on complex loads...

     -O3 is ambiguous

     Sometimes -O3 (GCC 4.8.2) generates slower code than -O2, and benchmarks built with -O3 show very strange and unexpected results. For example, this is the result for -O2:

     goto_request_line: 470ms

     However, -O3 shows a worse result:

     goto_request_line: 852ms

     Automata must be encoded statically whenever possible

     The table-driven and HSM automatons are encoded using static constant tables (as opposed to the run-time-generated tables of the current DPI parser). This was done during the HSM optimizations. Sometimes the compiler cannot optimize code that uses run-time-generated tables, and this is crucial for real hot spots (for the HSM, the table is used in the if-statement described above, which accounts for about 50-70% of the whole function's execution time): after moving to static data the code can gain up to 50% in performance (as was the case for the HSM).

     Source: High Performance Linux: Fast Finite State Machine for HTTP Parsing

     Refs:
     - Tempesta FW is a hybrid solution which combines a reverse proxy and a firewall at the same time. It accelerates Web applications and provides a high-performance framework with access to all network layers for running complex network traffic classification and blocking modules - http://natsys-lab.com/tpl/tempesta_fw.pdf
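     To make the goto-driven style discussed in this post concrete, here is a minimal sketch of a header-name matcher written that way. It is only an illustration, not Tempesta FW code; the function, label and accept-set definitions are invented for the example:

     #include <stddef.h>

     /* Accept set for header names, roughly [A-Za-z0-9_-] (illustrative). */
     static inline int
     hdr_name_char(unsigned char c)
     {
             return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                    || (c >= '0' && c <= '9') || c == '-' || c == '_';
     }

     /* Returns the length of the header name, or -1 on a malformed line.
      * Each state is a plain label and each transition is a direct goto,
      * so there is no explicit loop and no state variable to reload. */
     long
     parse_header_name(const unsigned char *p, const unsigned char *end)
     {
             const unsigned char *name = p;

     s_name:
             if (p == end)
                     return -1;
             if (hdr_name_char(*p)) {
                     p++;
                     goto s_name;
             }
             goto s_colon;

     s_colon:
             if (*p != ':')
                     return -1;
             return p - name;
     }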
  3. Just a little note to announce that we have released NAXSI, an open source, positive-model web application firewall for NGINX. Naxsi is now also an official OWASP project (yeepee!)

     Why? Because, first of all, there are not many open source WAFs out there; secondly, even if mod_security is awesome, we wanted something different, something more reverse-proxy oriented. And last but not least, as a security enthusiast, I'm not fond of the negative model when it comes to application firewalling, as js/html/*sql languages are so rich that it's very hard to cover 100% of the possible injection vectors. You may find some examples here: ModSecurity SQL Injection Challenge: Lessons Learned - SpiderLabs Anterior (results of the mod_security bypass contest). To make it short, a negative model requires a LOT of effort to maintain a core rule set (and we're far from being able to do what the mod_security project has done). So we are left with proprietary appliances, and as a hoster (more than 1,000 websites currently hosted), proprietary appliances are not even an option. This is why we decided to create NAXSI.

     How? Well, a positive model can be fairly complicated and time-consuming to configure when you have a huge website, or a website that allows a lot of rich/complex user input. So we designed NAXSI to be as flexible and easy to configure as possible. Here is a global overview of how it works (a minimal configuration sketch is given at the end of this post):

     1. NAXSI does not have 'rules', strictly speaking. It simply "scores" strange characters in user content. When a request reaches a critical score, the request is denied.
     2. The learning mode relies heavily on NGINX's power. When in learning mode, all to-be-denied requests are allowed AND posted back to a specific location (in NGINX terms) pointing to a script that analyzes the request, generates the appropriate whitelists, writes them to naxsi's configuration file and reloads NGINX. (Thanks to NGINX's design, current connections won't be closed, so it's 100% invisible to the end user.)
     3. Once you are in a "production" state (no more learning mode; NAXSI is actually blocking requests), all denied requests are redirected to a specific location, where you can:
     4. Depending on the user's IP, switch back into learning mode (for some IPs, naxsi will always be in learning mode and generate whitelists on the fly).
     5. If the user thinks it's a false positive, he can fill in a captcha. If he decides to do so, a mail is sent with the associated generated whitelists and the detailed request (the full HTTP request, so that it can be reproduced).
     6. Very simple rule syntax, allowing (for extreme cases) easy hand-tuned whitelist or negative rule writing.

     As you can see, we tried to make this as easy as possible to configure and use. During configuration, the user should never have to edit NAXSI's whitelist configuration by hand, as it is 100% automatically generated via learning mode. You can even partially perform this step with a crawler (if yours is good enough). You can find more details on the project's Google Code page: naxsi.googlecode.com.

     What? Naxsi, thanks to NGINX's power, can do pretty much whatever you want: turn on learning mode for some users only, redirect forbidden requests to another domain, a vhost, or a single page. Those of you who know NGINX understand how right I am; for the others, have a look at NGINX, it's pure awesomeness!

     When? Naxsi is currently released with "alpha" status, but we are already deploying it on various production sites.
     For those wishing to try naxsi, I really recommend that you use the SVN to fetch the latest sources, as packaging is not done on a regular basis right now.

     Test? We have set up a test box (referenced on naxsi's wiki, here: OnlyTrustWhatYouCanTest - naxsi - Naxsi is an open source, high performance, low rules maintenance, Web Application Firewall module for Nginx - Google Project Hosting) where you can try naxsi yourself, as we set up the box as a reverse proxy to deliberately vulnerable websites!

     Wanna help? You're welcome! We are currently looking for web developers to set up a nicer forbidden page and even a reporting interface. We are also looking for people to test the software and give us feedback.

     What's next? We are thinking very seriously about supporting the mod_security CRS level 1 in NAXSI, so that we can have the perfect firewall, fitting every kind of website! So stay tuned!

     Source: Naxsi, open source WAF (Web Application Firewall) for NGINX
     Download: https://github.com/nbs-system/naxsi
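     As a rough illustration of points 1-3 above, a NAXSI-enabled nginx location might look something like the sketch below. This is only an assumed example based on the description in this post, not from the announcement itself: the score threshold, upstream name and /RequestDenied location are illustrative, and the core rules file must also be included at the http{} level (see the project documentation for the real details).

     location / {
         proxy_pass http://backend;        # "backend" is an illustrative upstream name
         SecRulesEnabled;                   # enable naxsi for this location
         #LearningMode;                     # uncomment while generating whitelists
         DeniedUrl "/RequestDenied";        # where denied requests are sent
         # deny the request once a score passes the chosen threshold (values are examples)
         CheckRule "$SQL >= 8" BLOCK;
         CheckRule "$XSS >= 8" BLOCK;
     }

     location /RequestDenied {
         return 403;                        # or point it at the learning/report script
     }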
  4. This tutorial is addressed to those who want to configure a server with Debian. It covers the following topics:

     1. Installing a kernel with grsec.
     2. Installing and configuring Apache.
     3. Installing and configuring PHP.
     4. Installing PHP modules (extensions).
     5. Configuring Suhosin.
     6. Installing MySQL Server.
     7. Various permission changes for a better level of security.
     8. Installing nginx and using it as a frontend for Apache (two-layer web server).

     ----------------

     1. Installing a kernel with grsec.

     If you don't yet know what grsec/grsecurity is, a good starting point is Grsecurity. For Linux, grsecurity is something of a "holy grail" when it comes to security. In addition, it gets rid of a Linux behaviour that irritates me: 'ps aux' run as a regular user shows all processes.

     root@tex:~# echo "deb http://debian.cr0.org/repo/ kernel-security/" >> /etc/apt/sources.list
     root@tex:~# wget http://kernelsec.cr0.org/kernel-security.asc
     root@tex:~# apt-key add kernel-security.asc
     OK
     root@tex:~# apt-get update
     root@tex:~# apt-cache search grsec
     linux-source-2.6.32.15-1-grsec - Linux kernel source for version 2.6.32.15-1-grsec
     linux-source-2.6.25.10-1-grsec - Linux kernel source for version 2.6.25.10-1-grsec
     linux-image-2.6.32.15-1-grsec - Linux kernel binary image for version 2.6.32.15-1-grsec
     linux-headers-2.6.32.15-1-grsec - Header files related to Linux kernel, specifically,
     linux-source-2.6.27.29-4-grsec - Linux kernel source for version 2.6.27.29-4-grsec
     root@tex:~# apt-get install linux-image-2.6.32.15-1-grsec linux-headers-2.6.32.15-1-grsec
     root@tex:~# init 6 # reboot here so the new kernel is booted

     // After the reboot

     root@tex:~# uname -a
     Linux tex 2.6.32.15-1-grsec #2 SMP Mon Jun 28 09:05:30 CEST 2010 x86_64 GNU/Linux
     root@tex:~# su - tex
     tex@tex:~$ ps aux
     USER PID %CPU %MEM   VSZ  RSS TTY   STAT START TIME COMMAND
     tex  2103  0.6  0.1 36908 1276 pts/0 S    00:58 0:00 su - tex
     tex  2104 13.0  0.6 23380 6200 pts/0 S    00:58 0:00 -su
     tex  2129  0.0  0.1 16332 1176 pts/0 R+   00:58 0:00 ps aux

     As you can see, as a regular user I only see my own processes.

     2. Installing and configuring Apache.

     root@tex:~# apt-get install apache2-mpm-prefork apache2.2-common apache2.2-bin
     root@tex:~# rm /etc/apache2/sites-available/default
     root@tex:~# cat >> /etc/apache2/sites-available/default << EOF
     > NameVirtualHost *
     >
     > <Directory "/var/www">
     > AllowOverride AuthConfig FileInfo Options Indexes Limit
     > Options FollowSymLinks
     > Options -Indexes
     > </Directory>
     >
     > <VirtualHost *>
     > DocumentRoot /var/www
     > ServerName 10.0.0.220
     > CustomLog /var/log/apache2/access_log combined
     > ErrorLog /var/log/apache2/error_log
     > </VirtualHost>
     > EOF
     root@tex:~#

     Apache will listen on 127.0.0.1 port 81 and will act as the backend.

     root@tex:~# echo "Listen 127.0.0.1:81" > /etc/apache2/ports.conf
     root@tex:~# /etc/init.d/apache2 start

     3. Installing and configuring PHP (plus libapache2-mod-php5, needed by Apache for mod_php).

     I will install PHP from dotdeb.

     root@tex:~# echo "deb http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
     root@tex:~# echo "deb-src http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
     root@tex:~# wget http://www.dotdeb.org/dotdeb.gpg
     root@tex:~# cat dotdeb.gpg |apt-key add - && rm dotdeb.gpg
     OK
     root@tex:~# apt-get update
     root@tex:~# apt-get install php5 php5-cli libapache2-mod-php5 php5-common php5-suhosin

     In the Apache php.ini I replace "expose_php = On" with "expose_php = Off", "short_open_tag = Off" with "short_open_tag = On", and "session.name = PHPSESSID" with "session.name = SERVLET".
     root@tex:~# perl -pi -e 's/expose_php = On/expose_php = Off/' /etc/php5/apache2/php.ini
     root@tex:~# perl -pi -e 's/short_open_tag = Off/short_open_tag = On/' /etc/php5/apache2/php.ini
     root@tex:~# perl -pi -e 's/PHPSESSID/SERVLET/' /etc/php5/apache2/php.ini

     4. Installing and configuring PHP modules (extensions).

     I will install the following PHP extensions: curl, gd, mcrypt, mysql.

     root@tex:~# apt-get install php5-curl php5-gd php5-mcrypt php5-mysql

     5. Configuring Suhosin.

     For security reasons, I will blacklist the following functions using Suhosin: exec, shell_exec, passthru, show_source, dl, leak, ini_alter, ini_restore, proc_open, proc_nice, proc_terminate, proc_close, proc_get_status, symlink, system, popen, pcntl_getpriority, pcntl_wait, diskfreespace, disk_free_space, disk_total_space, get_current_user, stream_socket_accept, stream_socket_client, stream_socket_get_name, stream_socket_recvfrom, stream_socket_sendto, stream_socket_server, stream_socket_shutdown.

     root@tex:~# cat >> /etc/php5/conf.d/suhosin.ini << EOF
     >
     > suhosin.executor.func.blacklist = "exec,shell_exec,passthru,show_source,dl,leak,ini_alter,ini_restore,proc_open,proc_nice,proc_terminate,proc_close,proc_get_status,symlink,system,popen,pcntl_getpriority,pcntl_wait,diskfreespace,disk_free_space,disk_total_space,get_current_user,stream_socket_accept,stream_socket_client,stream_socket_get_name,stream_socket_recvfrom,stream_socket_sendto,stream_socket_server,stream_socket_shutdown"
     > suhosin.cookie.max_array_depth = 256
     > suhosin.cookie.max_array_index_length = 256
     > suhosin.cookie.max_name_length = 256
     > suhosin.cookie.max_totalname_length = 512
     > suhosin.cookie.max_value_length = 20000
     > suhosin.cookie.max_vars = 200
     > suhosin.get.max_array_depth = 200
     > suhosin.get.max_totalname_length = 1024
     > suhosin.get.max_value_length = 1024
     > suhosin.get.max_vars = 1024
     >
     > suhosin.post.max_array_depth = 1024
     > suhosin.post.max_array_index_length = 1024
     > suhosin.post.max_name_length = 1024
     > suhosin.post.max_totalname_length = 1024
     > suhosin.post.max_value_length = 95000
     > suhosin.post.max_vars = 1024
     >
     > suhosin.request.max_vars = 512
     > suhosin.request.max_value_length = 90000
     > suhosin.request.max_totalname_length = 1024
     > suhosin.upload.max_uploads = 400
     >
     > suhosin.executor.include.max_traversal = 2
     >
     > EOF
     root@tex:~#

     This is what PHP looks like in the CLI:

     root@tex:~# php -v
     PHP 5.3.8-1~dotdeb.2 with Suhosin-Patch (cli) (built: Aug 25 2011 13:30:46)
     Copyright (c) 1997-2011 The PHP Group
     Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
         with Suhosin v0.9.32.1, Copyright (c) 2007-2010, by SektionEins GmbH
     root@tex:~#

     6. Installing MySQL Server and MySQL Client.

     root@tex:~# apt-get install mysql-client-5.5 mysql-server-5.5

     7. Various permission changes for a better level of security.

     We mount a tmpfs on /tmp with the "noexec,nosuid,nodev" flags for security reasons (dump and fsck pass are set to 0, as is usual for tmpfs).

     root@tex:~# echo "tmpfs /tmp tmpfs noexec,nosuid,nodev 0 0" >> /etc/fstab
     root@tex:~# mount /tmp
     root@tex:~# mount |grep "/tmp"
     tmpfs on /tmp type tmpfs (rw,noexec,nosuid,nodev)

     We delete "/var/tmp" and make it a symlink to /tmp.

     root@tex:~# rm -rf /var/tmp/ && ln -s /tmp /var/tmp

     We chmod 640 "/dev/shm" for security reasons.

     root@tex:~# chmod 640 /dev/shm

     8. Installing nginx and using it as a frontend for Apache (two-layer web server).

     We will have nginx listen on port 80 and use it as a frontend for Apache, which listens on 127.0.0.1 port 81
     (reverse proxy).

     root@tex:~# apt-get install nginx
     root@tex:~# rm /etc/nginx/sites-enabled/default
     root@tex:~# pico /etc/nginx/sites-enabled/default

     # configuration file
     server {
         listen 0.0.0.0:80 default;
         server_name _;
         access_log off;
         error_log /dev/null;

         location / {
             proxy_pass http://127.0.0.1:81;
             proxy_redirect off;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }
     }

     We start nginx.

     root@tex:~# /etc/init.d/nginx start
     Starting nginx: nginx.
     root@tex:~#

     I will put a phpinfo file in "/var/www/" to check that everything is in order, and delete the default index.html.

     root@tex:~# echo "<?php phpinfo(); ?>" >> /var/www/index.php
     root@tex:~# rm /var/www/index.html

     // restart Apache
     root@tex:~# /etc/init.d/apache2 restart

     ---------

     Notes:
     - If you have questions about this tutorial, I will gladly answer them.
     - I apologize for any mistakes in phrasing (I have been skipping sleep).
     - I did not specify a source for this tutorial because I wrote it myself.
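     As a quick check of the two-layer setup (not part of the original tutorial), you can verify that nginx answers on port 80 while Apache is bound only to the loopback address; the exact output will of course differ on your machine:

     # nginx should be listening on *:80, apache2 only on 127.0.0.1:81
     root@tex:~# netstat -lntp | grep -E ':80|:81'

     # the page should be served through the proxy chain
     root@tex:~# curl -sI http://127.0.0.1/ | head -n 5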