[cpif] r220 - trunk/frontend-web

svn at argo.es
Sun Jul 1 19:55:19 CEST 2007


Author: jcea
Date: Sun Jul  1 19:55:17 2007
New Revision: 220

Log:
Implement support for persistent connections
(optional).

Two timeouts are included: one for the first request
and another for subsequent requests.

Without this code, the web microserver waits forever
for the web client to send it a request.
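
In a minimal sketch, the mechanism works like this (the timeout values
are the ones added to globales.py below; the helper name is illustrative,
not part of CPIF):

import select

http_initial_timeout = 30     # seconds granted to the first request
http_keep_alive_timeout = 5   # seconds granted to follow-up requests

def wait_for_request(rfile, first_request):
    # Wait on the client socket with the appropriate timeout.
    timeout = http_initial_timeout if first_request else http_keep_alive_timeout
    readable, _, _ = select.select([rfile], [], [], timeout)
    # Nothing readable within the timeout: give up and close the connection.
    return bool(readable)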

For this and many other reasons, in production CPIF
must sit BEHIND an APACHE/SQUID server or similar.
End users should not be able to connect directly to the
CPIF web microserver; otherwise it is trivial to block
the service, consume arbitrary amounts of memory, etc.

This is by design, and there are no plans to improve it.
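
As a usage sketch (host and port are assumptions, not taken from this
commit), a Python 2 client can reuse one connection for several requests:

import httplib

# Assumed address of the CPIF web microserver; adjust as needed.
conn = httplib.HTTPConnection("localhost", 8000)

# First request: the server waits up to http_initial_timeout seconds for it.
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()    # drain the body so the connection can be reused

# Second request on the same TCP connection (HTTP/1.1 keep-alive).
# The server now waits only http_keep_alive_timeout seconds.
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()

conn.close()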



Modified:
   trunk/frontend-web/globales.py
   trunk/frontend-web/servidor_web.py

Modified: trunk/frontend-web/globales.py
==============================================================================
--- trunk/frontend-web/globales.py	(original)
+++ trunk/frontend-web/globales.py	Sun Jul  1 19:55:17 2007
@@ -21,6 +21,11 @@
 # Simultaneous HTTP connections
 http_max_clients=16
 
+# Specify initial HTTP timeout
+http_initial_timeout=30
+# Keep-alive timeout (if zero or False, no keep-alive support)
+http_keep_alive_timeout=5
+
 # OpenID Support
 openid_support=True
 

Modified: trunk/frontend-web/servidor_web.py
==============================================================================
--- trunk/frontend-web/servidor_web.py	(original)
+++ trunk/frontend-web/servidor_web.py	Sun Jul  1 19:55:17 2007
@@ -1,6 +1,8 @@
 # $Id$
 
-from globales import monitor,allow_anonymous,http_max_clients
+from globales import monitor,allow_anonymous
+from globales import http_max_clients
+from globales import http_initial_timeout,http_keep_alive_timeout
 
 urls={}
 
@@ -33,6 +35,37 @@
   class handler(BaseHTTPRequestHandler) :
     must_stop=False
 
+    if http_keep_alive_timeout : # Persistent connections
+      protocol_version="HTTP/1.1"
+
+    first_request=True
+
+# The following method is ripped from BaseHTTPServer,
+# implementing timeouts. When updating Python, coders
+# should verify consistency with the BaseHTTPServer
+# codebase.
+    def handle_one_request(self):
+        global http_initial_timeout,http_keep_alive_timeout
+        import select
+        v,dummy,dummy2=select.select([self.rfile],[],[],http_initial_timeout if self.first_request else http_keep_alive_timeout)
+        self.first_request=False
+        if v==[] :
+          self.close_connection = 1
+          return
+
+        self.raw_requestline = self.rfile.readline()
+        if not self.raw_requestline:
+            self.close_connection = 1
+            return
+        if not self.parse_request(): # An error code has been sent, just exit
+            return
+        mname = 'do_' + self.command
+        if not hasattr(self, mname):
+            self.send_error(501, "Unsupported method (%r)" % self.command)
+            return
+        method = getattr(self, mname)
+        method()
+
     def do_GET(self) :
       global urls,allow_anonymous
       cookie=self.headers.get("cookie",None)
@@ -72,7 +105,11 @@
         print >>sys.stderr,"EXCEPCION:",time.ctime()
         raise
 
+      # We need this for HTTP persistent connections
+      resultado[1]["Content-Length"]=len(resultado[2])
+
       self.send_response(resultado[0])
+
       for i,j in resultado[1].iteritems() :
         self.send_header(i,j)
       self.end_headers()


