# Dealing with Request Data

The most important rule about web development is "Do not trust the user". This is especially true for incoming request data on the input stream. With WSGI this is actually a bit harder than you would expect. Because of that, Werkzeug wraps the request stream for you to save you from the most prominent problems with it.

### Missing EOF Marker on Input Stream

The input stream has no end-of-file marker. If you called the read() method on the wsgi.input stream, your application would hang on conforming servers. This is actually intentional, however painful. Werkzeug solves that problem by wrapping the input stream in a special LimitedStream. The input stream is exposed on the request objects as stream. This is either an empty stream (if the form data was parsed) or a limited stream with the contents of the input stream.

### When does Werkzeug Parse?

Werkzeug parses the incoming data in the following situations:

- you access either form, files, or stream and the request method was POST or PUT.
- you call parse_form_data().

These calls are not interchangeable. If you invoke parse_form_data() you must not use the request object, or at least not the attributes that trigger the parsing process. This is also true if you read from the wsgi.input stream before the parsing.

**General rule:** Leave the WSGI input stream alone, especially in WSGI middlewares. Use either the parsing functions or the request object. Do not mix multiple WSGI utility libraries for form data parsing or anything else that works on the input stream.

### How does it Parse?

The standard Werkzeug parsing behavior handles three cases:

- input content type was multipart/form-data. In this situation the stream will be empty, form will contain the regular POST / PUT data, and files will contain the uploaded files as FileStorage objects.
- input content type was application/x-www-form-urlencoded. Then the stream will be empty, form will contain the regular POST / PUT data, and files will be empty.
- the input content type was neither of them; stream points to a LimitedStream with the input data for further processing.

Special note on the get_data method: calling it loads the full request data into memory. This is only safe to do if max_content_length is set. Also, you can *either* read the stream *or* call get_data().

### Limiting Request Data

To avoid being the victim of a DDOS attack you can set the maximum accepted content length and request field sizes. The BaseRequest class has two attributes for that: max_content_length and max_form_memory_size.

The first one can be used to limit the total content length. For example, by setting it to 1024 * 1024 * 16 the request won't accept more than 16MB of transmitted data.

Because certain data can't be moved to the hard disk (regular post data) whereas temporary files can, there is a second limit you can set. max_form_memory_size limits the size of POST-transmitted form data. By setting it to 1024 * 1024 * 2 you can make sure that no more than 2MB of form data is kept in memory.

This however does *not* affect in-memory stored files if the stream_factory used returns an in-memory file.

### How to extend Parsing?

Modern web applications transmit a lot more than multipart form data or URL-encoded data. Extending the parsing capabilities by subclassing BaseRequest is simple.
The following example implements parsing for incoming JSON data:

~~~
from werkzeug.utils import cached_property
from werkzeug.wrappers import Request
from simplejson import loads

class JSONRequest(Request):
    # accept up to 4MB of transmitted data.
    max_content_length = 1024 * 1024 * 4

    @cached_property
    def json(self):
        if self.headers.get('content-type') == 'application/json':
            return loads(self.data)
~~~
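
Building on the limits described above, here is a minimal sketch of a request class that enforces both attributes, together with a tiny view using it. The 16MB / 2MB values and the names LimitedRequest and application are illustrative, not prescribed by Werkzeug:

~~~
from werkzeug.wrappers import Request, Response

class LimitedRequest(Request):
    # Reject request bodies larger than 16MB outright.
    max_content_length = 1024 * 1024 * 16
    # Keep at most 2MB of submitted form data in memory.
    max_form_memory_size = 1024 * 1024 * 2

@LimitedRequest.application
def application(request):
    # Accessing request.form triggers parsing; if a limit above is
    # exceeded, Werkzeug's form parser aborts with a 413 error instead
    # of running the view body.
    name = request.form.get('name', 'anonymous')
    return Response('Hello %s!' % name)
~~~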

# Unicode

Since early Python 2 days unicode has been part of all default Python builds. It allows developers to write applications that deal with non-ASCII characters in a straightforward way. But working with unicode requires a basic knowledge about the matter, especially when working with libraries that do not support it.

Werkzeug uses unicode internally everywhere text data is assumed, even though the HTTP standard is not unicode aware. Basically all incoming data is decoded from the charset specified (utf-8 by default) so that you don't operate on bytestrings any more. Outgoing unicode data is then encoded into the target charset again.

### Unicode in Python

In Python 2 there are two basic string types: str and unicode. str may carry encoded unicode data but it's always represented in bytes, whereas the unicode type does not contain bytes but code points. What does this mean? Imagine you have the German umlaut ö. In ASCII you cannot represent that character, but in the latin-1 and utf-8 character sets you can represent it; however, it looks different when encoded:

~~~
>>> u'ö'.encode('latin1')
'\xf6'
>>> u'ö'.encode('utf-8')
'\xc3\xb6'
~~~

So an ö might look totally different depending on the encoding, which makes it hard to work with. The solution is using the unicode type (as we did above; note the u prefix before the string). The unicode type does not store the bytes for ö but the information that this is a LATIN SMALL LETTER O WITH DIAERESIS.

Doing len(u'ö') will always give us the expected "1", but len('ö') might give different results depending on the encoding of 'ö'.

### Unicode in HTTP

The problem with unicode is that HTTP does not know what unicode is. HTTP is limited to bytes, but this is not a big problem as Werkzeug decodes and encodes all incoming and outgoing data for us automatically. Basically what this means is that data sent from the browser to the web application is by default decoded from a utf-8 bytestring into a unicode string. Data sent from the application back to the browser that is not yet a bytestring is then encoded back to utf-8.

Usually this "just works" and we don't have to worry about it, but there are situations where this behavior is problematic. For example, the Python 2 IO layer is not unicode aware. This means that whenever you work with data from the file system you have to properly decode it. The correct way to load a text file from the file system looks like this:

~~~
f = file('/path/to/the_file.txt', 'r')
try:
    text = f.read().decode('utf-8')  # assuming the file is utf-8 encoded
finally:
    f.close()
~~~

There is also the codecs module, which provides an open function that decodes automatically from the given encoding.

### Error Handling

From Werkzeug 0.3 onwards you can further control the way Werkzeug works with unicode. In the past Werkzeug ignored encoding errors silently on incoming data. This decision was made to avoid internal server errors if the user tampered with the submitted data. However, there are situations where you want to abort with a 400 BAD REQUEST instead of silently ignoring the error.

All the functions that do internal decoding now accept an errors keyword argument that behaves like the errors parameter of the builtin string method decode. The following values are possible (a short plain-Python illustration follows the list):

- ignore: This is the default behavior and tells the codec to silently ignore characters that it doesn't understand.
- replace: The codec will replace unknown characters with a replacement character (U+FFFD REPLACEMENT CHARACTER).
- strict: Raise an exception if decoding fails.
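
These three modes behave exactly like the corresponding arguments of the builtin str.decode(); a quick plain-Python 2 illustration, with an arbitrary byte string that is not valid utf-8 and no Werkzeug involved:

~~~
data = '\xff\xfeincomplete'              # not valid utf-8

print data.decode('utf-8', 'ignore')     # u'incomplete'  (bad bytes dropped)
print data.decode('utf-8', 'replace')    # u'\ufffd\ufffdincomplete'
data.decode('utf-8', 'strict')           # raises UnicodeDecodeError
~~~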
Unlike regular Python decoding, Werkzeug does not raise a [UnicodeDecodeError](http://docs.python.org/dev/library/exceptions.html#UnicodeDecodeError) if the decoding fails but an HTTPUnicodeError, which is a direct subclass of UnicodeError and the BadRequest HTTP exception. The reason is that if this exception is not caught by the application but a catch-all for HTTP exceptions exists, a default 400 BAD REQUEST error page is displayed.

There is additional error handling available which is a Werkzeug extension to the regular codec error handling, called fallback. Often you want to use utf-8 but support latin1 as a legacy encoding too if decoding fails. For this case you can use the fallback error handling. For example you can specify 'fallback:iso-8859-15' to tell Werkzeug it should try iso-8859-15 if utf-8 failed. If this decoding fails too (which should not happen for most legacy charsets such as iso-8859-15) the error is silently ignored as if the error handling was ignore.

Further details are available as part of the API documentation of the concrete implementations of the functions or classes working with unicode.

### Request and Response Objects

As request and response objects usually are the central entities of Werkzeug-powered applications, you can change the default encoding Werkzeug operates on by subclassing these two classes. For example you can easily set the application to utf-7 and strict error handling:

~~~
from werkzeug.wrappers import BaseRequest, BaseResponse

class Request(BaseRequest):
    charset = 'utf-7'
    encoding_errors = 'strict'

class Response(BaseResponse):
    charset = 'utf-7'
~~~

Keep in mind that the error handling is only customizable for decoding, not encoding. If Werkzeug encounters an encoding error it will raise a [UnicodeEncodeError](http://docs.python.org/dev/library/exceptions.html#UnicodeEncodeError). It's your responsibility not to create data that is not present in the target charset (a non-issue with all unicode encodings such as utf-8).
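
To see the outgoing side in action, here is a small sketch using the default utf-8 charset; get_data() is available on response objects from Werkzeug 0.9 onwards:

~~~
# -*- coding: utf-8 -*-
from werkzeug.wrappers import BaseResponse

resp = BaseResponse(u'Grüße', mimetype='text/plain')
body = resp.get_data()                   # bytes, encoded with resp.charset
assert body == u'Grüße'.encode('utf-8')  # utf-8 is the default charset
~~~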

# Important Terms

This page covers important terms used in the documentation and Werkzeug itself.

### WSGI

WSGI is a specification for Python web applications that Werkzeug follows. It was specified in [**PEP 333**](http://www.python.org/dev/peps/pep-0333) and is widely supported. Unlike previous solutions it guarantees that web applications, servers and utilities can work together.

### Response Object

For Werkzeug, a response object is an object that works like a WSGI application but does not do any request processing. Usually you have a view function or controller method that processes the request and assembles a response object.

A response object is *not* necessarily the BaseResponse object or a subclass thereof. For example Pylons/webob provide a very similar response class that can be used as well (webob.Response).

### View Function

Often people speak of MVC (Model, View, Controller) when developing web applications. However, the Django framework coined MTV (Model, Template, View), which basically means the same but reduces the concept to the data model, a function that processes data from the request and the database and renders a template.

Werkzeug itself does not tell you how you should develop applications, but the documentation often speaks of view functions that work roughly the same. The idea of a view function is that it's called with a request object (and optionally some parameters from a URL rule) and returns a response object.
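
As a purely illustrative sketch of that contract, a view function takes the request (plus any values captured by a URL rule) and hands back a response object, which itself behaves like a WSGI application:

~~~
from werkzeug.wrappers import Request, Response

def hello_view(request, name='World'):
    # 'request' is the incoming request object; 'name' stands in for a
    # parameter captured by a URL rule. The return value is a response
    # object that the WSGI layer can call directly.
    return Response('Hello %s!' % name, mimetype='text/plain')
~~~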

# Werkzeug Changelog

This file lists all major changes in Werkzeug over the versions. For API breaking changes have a look at [*API Changes*](#), they are listed there in detail.

### Version 0.10

Release date and codename to be decided

- Changed the error handling of and improved testsuite for the caches in contrib.cache.
- Fixed a bug on Python 3 when creating adhoc ssl contexts, due to sys.maxint not being defined.
- Fixed a bug on Python 3 that caused make_ssl_devcert() to fail with an exception.
- Added exceptions for 504 and 505.
- Added support for ChromeOS detection.
- Added UUID converter to the routing system.
- Added message that explains how to quit the server.
- Fixed a bug on Python 2 that caused len for [werkzeug.datastructures.CombinedMultiDict](# "werkzeug.datastructures.CombinedMultiDict") to crash.
- Added support for stdlib pbkdf2 hmac if a compatible digest is found.

### Version 0.9.5

(bugfix release, release date to be decided)

- Forward charset argument from request objects to the environ builder.
- Fixed error handling for missing boundaries in multipart data.
- Fixed session creation on systems without os.urandom().
- Fixed pluses in dictionary keys not being properly URL encoded.
- Fixed a problem with deepcopy not working for multi dicts.
- Fixed a double quoting issue on redirects.
- Fixed a problem with unicode keys appearing in headers on 2.x.
- Fixed a bug with unicode strings in the test builder.
- Fixed a unicode bug on Python 3 in the WSGI profiler.

### Version 0.9.4

(bugfix release, released on August 26th 2013)

- Fixed an issue with Python 3.3 and an edge case in cookie parsing.
- Fixed decoding errors not handled properly through the WSGI decoding dance.
- Fixed URI to IRI conversion incorrectly decoding percent signs.

### Version 0.9.3

(bugfix release, released on July 25th 2013)

- Restored behavior of the data descriptor of the request class to pre-0.9 behavior. This now also means that .data and .get_data() have different behavior. New code should use .get_data() always. In addition to that there is now a flag for the .get_data() method that controls what should happen with form data parsing, and the form parser will honor cached data. This makes dealing with custom form data more consistent.

### Version 0.9.2

(bugfix release, released on July 18th 2013)

- Added unsafe parameter to [url_quote()](# "werkzeug.urls.url_quote").
- Fixed an issue with [url_quote_plus()](# "werkzeug.urls.url_quote_plus") not quoting '+' correctly.
- Ported remaining parts of RedisCache to Python 3.3.
- Ported remaining parts of MemcachedCache to Python 3.3.
- Fixed a deprecation warning in the contrib atom module.
- Fixed a regression with setting of content types through the headers dictionary instead of with the content type parameter.
- Use correct name for stdlib secure string comparison function.
- Fixed a wrong reference in the docstring of [release_local()](# "werkzeug.local.release_local").
- Fixed an AttributeError that sometimes occurred when accessing the [werkzeug.wrappers.BaseResponse.is_streamed](# "werkzeug.wrappers.BaseResponse.is_streamed") attribute.

### Version 0.9.1

(bugfix release, released on June 14th 2013)

- Fixed an issue with integers no longer being accepted in certain parts of the routing system or URL quoting functions.
- Fixed an issue with url_quote not producing the right escape codes for single digit codepoints.
- Fixed an issue with [SharedDataMiddleware](# "werkzeug.wsgi.SharedDataMiddleware") notreading the path correctly and breaking on etag generation in somecases. - Properly handle Expect: 100-continue in the development serverto resolve issues with curl. - Automatically exhaust the input stream on request close. This shouldfix issues where not touching request files results in a timeout. - Fixed exhausting of streams not doing anything if a non-limitedstream was passed into the multipart parser. - Raised the buffer sizes for the multipart parser. ### Version 0.9 Released on June 13nd 2013, codename Planierraupe. - Added support for [tell()](# "werkzeug.wsgi.LimitedStream.tell")on the limited stream. - [ETags](# "werkzeug.datastructures.ETags") now is nonzero if itcontains at least one etag of any kind, including weak ones. - Added a workaround for a bug in the stdlib for SSL servers. - Improved SSL interface of the devserver so that it can generatecertificates easily and load them from files. - Refactored test client to invoke the open method on the classfor redirects. This makes subclassing more powerful. - [werkzeug.wsgi.make_chunk_iter()](# "werkzeug.wsgi.make_chunk_iter") and[werkzeug.wsgi.make_line_iter()](# "werkzeug.wsgi.make_line_iter") now support processing ofiterators and streams. - URL generation by the routing system now no longer quotes+. - URL fixing now no longer quotes certain reserved characters. - The [werkzeug.security.generate_password_hash()](# "werkzeug.security.generate_password_hash") andcheck functions now support any of the hashlib algorithms. - wsgi.get_current_url is now ascii safe for browsers sendingnon-ascii data in query strings. - improved parsing behavior for [werkzeug.http.parse_options_header()](# "werkzeug.http.parse_options_header") - added more operators to local proxies. - added a hook to override the default converter in the routingsystem. - The description field of HTTP exceptions is now always escaped.Use markup objects to disable that. - Added number of proxy argument to the proxy fix to make it moresecure out of the box on common proxy setups. It will by defaultno longer trust the x-forwarded-for header as much as it didbefore. - Added support for fragment handling in URI/IRI functions. - Added custom class support for [werkzeug.http.parse_dict_header()](# "werkzeug.http.parse_dict_header"). - Renamed LighttpdCGIRootFix to CGIRootFix. - Always treat + as safe when fixing URLs as people love misusing them. - Added support to profiling into directories in the contrib profiler. - The escape function now by default escapes quotes. - Changed repr of exceptions to be less magical. - Simplified exception interface to no longer require environmntsto be passed to recieve the response object. - Added sentinel argument to IterIO objects. - Added pbkdf2 support for the security module. - Added a plain request type that disables all form parsing to onlyleave the stream behind. - Removed support for deprecated fix_headers. - Removed support for deprecated header_list. - Removed support for deprecated parameter for iter_encoded. - Removed support for deprecated non-silent usage of the limitedstream object. - Removed support for previous dummy writable parameter onthe cached property. - Added support for explicitly closing request objects to closeassociated resources. - Conditional request handling or access to the data property on responses nolonger ignores direct passthrough mode. - Removed werkzeug.templates and werkzeug.contrib.kickstart. 
- Changed host lookup logic for forwarded hosts to allow lists ofhosts in which case only the first one is picked up. - Added wsgi.get_query_string, wsgi.get_path_info andwsgi.get_script_name and made the wsgi.pop_path_info andwsgi.peek_path_info functions perform unicode decoding. Thiswas necessary to avoid having to expose the WSGI encoding danceon Python 3. - Added content_encoding and content_md5 to the request object'scommon request descriptor mixin. - added options and trace to the test client. - Overhauled the utilization of the input stream to be easier to useand better to extend. The detection of content payload on the inputside is now more compliant with HTTP by detecting off the contenttype header instead of the request method. This also now means thatthe stream property on the request class is always available insteadof just when the parsing fails. - Added support for using [werkzeug.wrappers.BaseResponse](# "werkzeug.wrappers.BaseResponse") in a withstatement. - Changed get_app_iter to fetch the response early so that it does notfail when wrapping a response iterable. This makes filtering easier. - Introduced get_data and set_data methods for responses. - Introduced get_data for requests. - Soft deprecated the data descriptors for request and response objects. - Added as_bytes operations to some of the headers to simplify workingwith things like cookies. - Made the debugger paste tracebacks into github's gist service asprivate pastes. ### Version 0.8.4 (bugfix release, release date to be announced) - Added a favicon to the debugger which fixes problem withstate changes being triggered through a request to/favicon.ico in Google Chrome. This should fix someproblems with Flask and other frameworks that usecontext local objects on a stack with context preservationon errors. - Fixed an issue with scolling up in the debugger. - Fixed an issue with debuggers running on a different URLthan the URL root. - Fixed a problem with proxies not forwarding some rarelyused special methods properly. - Added a workaround to prevent the XSS protection from Chromebreaking the debugger. - Skip redis tests if redis is not running. - Fixed a typo in the multipart parser that caused content-typeto not be picked up properly. ### Version 0.8.3 (bugfix release, released on February 5th 2012) - Fixed another issue with [werkzeug.wsgi.make_line_iter()](# "werkzeug.wsgi.make_line_iter")where lines longer than the buffer size were not handledproperly. - Restore stdout after debug console finished executing sothat the debugger can be used on GAE better. - Fixed a bug with the redis cache for int subclasses(affects bool caching). - Fixed an XSS problem with redirect targets coming fromuntrusted sources. - Redis cache backend now supports password authentication. ### Version 0.8.2 (bugfix release, released on December 16th 2011) - Fixed a problem with request handling of the builtin servernot responding to socket errors properly. - The routing request redirect exception's code attribute is nowused properly. - Fixed a bug with shutdowns on Windows. - Fixed a few unicode issues with non-ascii characters beinghardcoded in URL rules. - Fixed two property docstrings being assigned to fdel insteadof __doc__. - Fixed an issue where CRLF line endings could be split into twoby the line iter function, causing problems with multipart fileuploads. ### Version 0.8.1 (bugfix release, released on September 30th 2011) - Fixed an issue with the memcache not working properly. 
- Fixed an issue for Python 2.7.1 and higher that brokecopying of multidicts with [copy.copy()](http://docs.python.org/dev/library/copy.html#copy.copy "(在 Python v3.5)") [http://docs.python.org/dev/library/copy.html#copy.copy]. - Changed hashing methodology of immutable ordered multi dictsfor a potential problem with alternative Python implementations. ### Version 0.8 Released on September 29th 2011, codename Lötkolben - Removed data structure specific KeyErrors for a generalpurpose BadRequestKeyError. - Documented werkzeug.wrappers.BaseRequest._load_form_data(). - The routing system now also accepts strings instead ofdictionaries for the query_args parameter since we're onlypassing them through for redirects. - Werkzeug now automatically sets the content length immediately whenthe [data](# "werkzeug.wrappers.BaseResponse.data") attribute is setfor efficiency and simplicity reasons. - The routing system will now normalize server names to lowercase. - The routing system will no longer raise ValueErrors in case theconfiguration for the server name was incorrect. This should makedeployment much easier because you can ignore that factor now. - Fixed a bug with parsing HTTP digest headers. It rejected headerswith missing nc and nonce params. - Proxy fix now also updates wsgi.url_scheme based on X-Forwarded-Proto. - Added support for key prefixes to the redis cache. - Added the ability to suppress some auto corrections in the wrappersthat are now controlled via autocorrect_location_header andautomatically_set_content_length on the response objects. - Werkzeug now uses a new method to check that the length of incomingdata is complete and will raise IO errors by itself if the serverfails to do so. - [make_line_iter()](# "werkzeug.wsgi.make_line_iter") now requires a limit that isnot higher than the length the stream can provide. - Refactored form parsing into a form parser class that makes it possibleto hook into individual parts of the parsing process for debugging andextending. - For conditional responses the content length is no longer set when itis already there and added if missing. - Immutable datastructures are hashable now. - Headers datastructure no longer allows newlines in values to avoidheader injection attacks. - Made it possible through subclassing to select a different remoteaddr in the proxy fix. - Added stream based URL decoding. This reduces memory usage on largetransmitted form data that is URL decoded since Werkzeug will no longerload all the unparsed data into memory. - Memcache client now no longer uses the buggy cmemcache module andsupports pylibmc. GAE is not tried automatically and the dedicatedclass is no longer necessary. - Redis cache now properly serializes data. - Removed support for Python 2.4 ### Version 0.7.2 (bugfix release, released on September 30th 2011) - Fixed a CSRF problem with the debugger. - The debugger is now generating private pastes on lodgeit. - If URL maps are now bound to environments the query argumentsare properly decoded from it for redirects. ### Version 0.7.1 (bugfix release, released on July 26th 2011) - Fixed a problem with newer versions of IPython. - Disabled pyinotify based reloader which does not work reliably. ### Version 0.7 Released on July 24th 2011, codename Schraubschlüssel - Add support for python-libmemcached to the Werkzeug cache abstractionlayer. - Improved url_decode() and url_encode() performance. - Fixed an issue where the SharedDataMiddleware could cause aninternal server error on weird paths when loading via pkg_resources. 
- Fixed an URL generation bug that caused URLs to be invalid if agenerated component contains a colon. - werkzeug.import_string() now works with partially set uppackages properly. - Disabled automatic socket switching for IPv6 on the developmentserver due to problems it caused. - Werkzeug no longer overrides the Date header when creating aconditional HTTP response. - The routing system provides a method to retrieve the matchingmethods for a given path. - The routing system now accepts a parameter to change the encodingerror behaviour. - The local manager can now accept custom ident functions in theconstructor that are forwarded to the wrapped local objects. - url_unquote_plus now accepts unicode strings again. - Fixed an issue with the filesystem session support's prunefunction and concurrent usage. - Fixed a problem with external URL generation discarding the port. - Added support for pylibmc to the Werkzeug cache abstraction layer. - Fixed an issue with the new multipart parser that happened whena linebreak happened to be on the chunk limit. - Cookies are now set properly if ports are in use. A runtime erroris raised if one tries to set a cookie for a domain without a dot. - Fixed an issue with Template.from_file not working for filedescriptors. - Reloader can now use inotify to track reloads. This requires thepyinotify library to be installed. - Werkzeug debugger can now submit to custom lodgeit installations. - redirect function's status code assertion now allows 201 to be usedas redirection code. While it's not a real redirect, it sharesenough with redirects for the function to still be useful. - Fixed securecookie for pypy. - Fixed ValueErrors being raised on calls to best_match onMIMEAccept objects when invalid user data was supplied. - Deprecated werkzeug.contrib.kickstart and werkzeug.contrib.testtools - URL routing now can be passed the URL arguments to keep them forredirects. In the future matching on URL arguments might also bepossible. - Header encoding changed from utf-8 to latin1 to support a port toPython 3. Bytestrings passed to the object stay untouched whichmakes it possible to have utf-8 cookies. This is a part wherethe Python 3 version will later change in that it will alwaysoperate on latin1 values. - Fixed a bug in the form parser that caused the last character tobe dropped off if certain values in multipart data are used. - Multipart parser now looks at the part-individual content typeheader to override the global charset. - Introduced mimetype and mimetype_params attribute for the filestorage object. - Changed FileStorage filename fallback logic to skip special filenamesthat Python uses for marking special files like stdin. - Introduced more HTTP exception classes. - call_on_close now can be used as a decorator. - Support for redis as cache backend. - Added BaseRequest.scheme. - Support for the RFC 5789 PATCH method. - New custom routing parser and better ordering. - Removed support for is_behind_proxy. Use a WSGI middlewareinstead that rewrites the REMOTE_ADDR according to your setup.Also see the [werkzeug.contrib.fixers.ProxyFix](# "werkzeug.contrib.fixers.ProxyFix") fora drop-in replacement. - Added cookie forging support to the test client. - Added support for host based matching in the routing system. - Switched from the default ‘ignore' to the better ‘replace'unicode error handling mode. - The builtin server now adds a function named ‘werkzeug.server.shutdown'into the WSGI env to initiate a shutdown. This currently only worksin Python 2.6 and later. 
- Headers are now assumed to be latin1 for better compatibility withPython 3 once we have support. - Added [werkzeug.security.safe_join()](# "werkzeug.security.safe_join"). - Added accept_json property analogous to accept_html on the[werkzeug.datastructures.MIMEAccept](# "werkzeug.datastructures.MIMEAccept"). - [werkzeug.utils.import_string()](# "werkzeug.utils.import_string") now fails with much bettererror messages that pinpoint to the problem. - Added support for parsing of the If-Range header([werkzeug.http.parse_if_range_header()](# "werkzeug.http.parse_if_range_header") and[werkzeug.datastructures.IfRange](# "werkzeug.datastructures.IfRange")). - Added support for parsing of the Range header([werkzeug.http.parse_range_header()](# "werkzeug.http.parse_range_header") and[werkzeug.datastructures.Range](# "werkzeug.datastructures.Range")). - Added support for parsing of the Content-Range header of responsesand provided an accessor object for it([werkzeug.http.parse_content_range_header()](# "werkzeug.http.parse_content_range_header") and[werkzeug.datastructures.ContentRange](# "werkzeug.datastructures.ContentRange")). ### Version 0.6.2 (bugfix release, released on April 23th 2010) - renamed the attribute implicit_seqence_conversion attribute of therequest object to implicit_sequence_conversion. ### Version 0.6.1 (bugfix release, released on April 13th 2010) - heavily improved local objects. Should pick up standalone greenletbuilds now and support proxies to free callables as well. There isalso a stacked local now that makes it possible to invoke the sameapplication from within itself by pushing current request/responseon top of the stack. - routing build method will also build non-default method rules properlyif no method is provided. - added proper IPv6 support for the builtin server. - windows specific filesystem session store fixes.(should now be more stable under high concurrency) - fixed a NameError in the session system. - fixed a bug with empty arguments in the werkzeug.script system. - fixed a bug where log lines will be duplicated if an application useslogging.basicConfig() (#499) - added secure password hashing and checking functions. - HEAD is now implicitly added as method in the routing system ifGET is present. Not doing that was considered a bug because oftencode assumed that this is the case and in web servers that do notnormalize HEAD to GET this could break HEAD requests. - the script support can start SSL servers now. ### Version 0.6 Released on Feb 19th 2010, codename Hammer. - removed pending deprecations - sys.path is now printed from the testapp. - fixed an RFC 2068 incompatibility with cookie value quoting. - the FileStorage now gives access to the multipart headers. - cached_property.writeable has been deprecated. - MapAdapter.match() now accepts a return_rule keyword argumentthat returns the matched Rule instead of just the endpoint - [routing.Map.bind_to_environ()](# "routing.Map.bind_to_environ") raises a more correct error messagenow if the map was bound to an invalid WSGI environment. - added support for SSL to the builtin development server. - Response objects are no longer modified in place when they are evaluatedas WSGI applications. For backwards compatibility the fix_headersfunction is still called in case it was overridden.You should however change your application to use get_wsgi_headers ifyou need header modifications before responses are sent as the backwardscompatibility support will go away in future versions. 
- append_slash_redirect() no longer requires the QUERY_STRING to bein the WSGI environment. - added [DynamicCharsetResponseMixin](# "werkzeug.contrib.wrappers.DynamicCharsetResponseMixin") - added [DynamicCharsetRequestMixin](# "werkzeug.contrib.wrappers.DynamicCharsetRequestMixin") - added BaseRequest.url_charset - request and response objects have a default __repr__ now. - builtin data structures can be pickled now. - the form data parser will now look at the filename instead thecontent type to figure out if it should treat the upload as regularform data or file upload. This fixes a bug with Google Chrome. - improved performance of make_line_iter and the multipart parserfor binary uploads. - fixed is_streamed - fixed a path quoting bug in EnvironBuilder that caused PATH_INFO andSCRIPT_NAME to end up in the environ unquoted. - werkzeug.BaseResponse.freeze() now sets the content length. - for unknown HTTP methods the request stream is now always limitedinstead of being empty. This makes it easier to implement DAVand other protocols on top of Werkzeug. - added werkzeug.MIMEAccept.best_match() - multi-value test-client posts from a standard dictionary are nowsupported. Previously you had to use a multi dict. - rule templates properly work with submounts, subdomains andother rule factories now. - deprecated non-silent usage of the werkzeug.LimitedStream. - added support for IRI handling to many parts of Werkzeug. - development server properly logs to the werkzeug logger now. - added werkzeug.extract_path_info() - fixed a querystring quoting bug in url_fix() - added fallback_mimetype to werkzeug.SharedDataMiddleware. - deprecated BaseResponse.iter_encoded()‘s charset parameter. - added BaseResponse.make_sequence(),BaseResponse.is_sequence andBaseResponse._ensure_sequence(). - added better __repr__ of werkzeug.Map - import_string accepts unicode strings as well now. - development server doesn't break on double slashes after the host name. - better __repr__ and __str__ of[werkzeug.exceptions.HTTPException](# "werkzeug.exceptions.HTTPException") - test client works correctly with multiple cookies now. - the werkzeug.routing.Map now has a class attribute withthe default converter mapping. This helps subclasses to overridethe converters without passing them to the constructor. - implemented OrderedMultiDict - improved the session support for more efficient session storingon the filesystem. Also added support for listing of sessionscurrently stored in the filesystem session store. - werkzeug no longer utilizes the Python time module for parsingwhich means that dates in a broader range can be parsed. - the wrappers have no class attributes that make it possible toswap out the dict and list types it uses. - werkzeug debugger should work on the appengine dev server now. - the URL builder supports dropping of unexpected arguments now.Previously they were always appended to the URL as query string. - profiler now writes to the correct stream. ### Version 0.5.1 (bugfix release for 0.5, released on July 9th 2009) - fixed boolean check of FileStorage - url routing system properly supports unicode URL rules now. - file upload streams no longer have to provide a truncate()method. - implemented BaseRequest._form_parsing_failed(). - fixed #394 - ImmutableDict.copy(), ImmutableMultiDict.copy() andImmutableTypeConversionDict.copy() return mutable shallowcopies. - fixed a bug with the make_runserver script action. 
- MultiDict.items() and MutiDict.iteritems() now accept anargument to return a pair for each value of each key. - the multipart parser works better with hand-crafted multipartrequests now that have extra newlines added. This fixes a bugwith setuptools uploades not handled properly (#390) - fixed some minor bugs in the atom feed generator. - fixed a bug with client cookie header parsing being case sensitive. - fixed a not-working deprecation warning. - fixed package loading for SharedDataMiddleware. - fixed a bug in the secure cookie that made server-side expirationon servers with a local time that was not set to UTC impossible. - fixed console of the interactive debugger. ### Version 0.5 Released on April 24th, codename Schlagbohrer. - requires Python 2.4 now - fixed a bug in IterIO - added MIMEAccept and CharsetAccept that work like theregular Accept but have extra special normalization for mimetypesand charsets and extra convenience methods. - switched the serving system from wsgiref to something homebrew. - the Client now supports cookies. - added the [fixers](# "werkzeug.contrib.fixers") module with variousfixes for webserver bugs and hosting setup side-effects. - added [werkzeug.contrib.wrappers](# "werkzeug.contrib.wrappers") - added is_hop_by_hop_header() - added is_entity_header() - added remove_hop_by_hop_headers() - added pop_path_info() - added peek_path_info() - added wrap_file() and FileWrapper - moved LimitedStream from the contrib package into the regularwerkzeug one and changed the default behavior to raise exceptionsrather than stopping without warning. The old class will stick inthe module until 0.6. - implemented experimental multipart parser that replaces the old CGI hack. - added dump_options_header() and parse_options_header() - added quote_header_value() and unquote_header_value() - url_encode() and url_decode() now accept a separatorargument to switch between & and ; as pair separator. The magicswitch is no longer in place. - all form data parsing functions as well as the BaseRequestobject have parameters (or attributes) to limit the number ofincoming bytes (either totally or per field). - added LanguageAccept - request objects are now enforced to be read only for all collections. - added many new collection classes, refactored collections in general. - test support was refactored, semi-undocumented werkzeug.test.Filewas replaced by werkzeug.FileStorage. - EnvironBuilder was added and unifies the previous distinctcreate_environ(), Client andBaseRequest.from_values(). They all work the same now whichis less confusing. - officially documented imports from the internal modules as undefinedbehavior. These modules were never exposed as public interfaces. - removed FileStorage.__len__ which previously made the objectfalsy for browsers not sending the content length which all browsersdo. - SharedDataMiddleware uses wrap_file now and has aconfigurable cache timeout. - added CommonRequestDescriptorsMixin - added CommonResponseDescriptorsMixin.mimetype_params - added [werkzeug.contrib.lint](# "werkzeug.contrib.lint") - added passthrough_errors to run_simple. - added secure_filename - added make_line_iter() - MultiDict copies now instead of revealing internallists to the caller for getlist and iteration functions thatreturn lists. - added follow_redirect to the [open()](http://docs.python.org/dev/library/functions.html#open "(在 Python v3.5)") [http://docs.python.org/dev/library/functions.html#open] of Client. 
- added support for extra_files inmake_runserver() ### Version 0.4.1 (Bugfix release, released on January 11th 2009) - werkzeug.contrib.cache.Memcached accepts now objects thatimplement the memcache.Client interface as alternative to a list ofstrings with server addresses.There is also now a GAEMemcachedCache that connects to the Googleappengine cache. - explicitly convert secret keys to bytestrings now because Python2.6 no longer does that. - url_encode and all interfaces that call it, support ordering ofoptions now which however is disabled by default. - the development server no longer resolves the addresses of clients. - Fixed a typo in werkzeug.test that broke File. - Map.bind_to_environ uses the Host header now if available. - Fixed BaseCache.get_dict (#345) - werkzeug.test.Client can now run the application buffered in whichcase the application is properly closed automatically. - Fixed Headers.set (#354). Caused header duplication before. - Fixed Headers.pop (#349). default parameter was not properlyhandled. - Fixed UnboundLocalError in create_environ (#351) - Headers is more compatible with wsgiref now. - Template.render accepts multidicts now. - dropped support for Python 2.3 ### Version 0.4 Released on November 23rd 2008, codename Schraubenzieher. - Client supports an empty data argument now. - fixed a bug in Response.application that made it impossible to use itas method decorator. - the session system should work on appengine now - the secure cookie works properly in load balanced environments withdifferent cpu architectures now. - CacheControl.no_cache and CacheControl.private behavior changed toreflect the possibilities of the HTTP RFC. Setting these attributes toNone or True now sets the value to “the empty value”.More details in the documentation. - fixed werkzeug.contrib.atom.AtomFeed.__call__. (#338) - BaseResponse.make_conditional now always returns self. Previouslyit didn't for post requests and such. - fixed a bug in boolean attribute handling of html and xhtml. - added graceful error handling to the debugger pastebin feature. - added a more list like interface to Headers (slicing and indexingworks now) - fixed a bug with the __setitem__ method of Headers that didn'tproperly remove all keys on replacing. - added remove_entity_headers which removes all entity headers froma list of headers (or a Headers object) - the responses now automatically call remove_entity_headers if thestatus code is 304. - fixed a bug with Href query parameter handling. Previously the lastitem of a call to Href was not handled properly if it was a dict. - headers now support a pop operation to better work with environproperties. ### Version 0.3.1 (bugfix release, released on June 24th 2008) - fixed a security problem with werkzeug.contrib.SecureCookie.More details available in the [release announcement](http://lucumr.pocoo.org/cogitations/2008/06/24/werkzeug-031-released/) [http://lucumr.pocoo.org/cogitations/2008/06/24/werkzeug-031-released/]. ### Version 0.3 Released on June 14th 2008, codename EUR325CAT6. - added support for redirecting in url routing. 
- added Authorization and AuthorizationMixin - added WWWAuthenticate and WWWAuthenticateMixin - added parse_list_header - added parse_dict_header - added parse_authorization_header - added parse_www_authenticate_header - added _get_current_object method to LocalProxy objects - added parse_form_data - MultiDict, CombinedMultiDict, Headers, and EnvironHeaders raisespecial key errors now that are subclasses of BadRequest so if youdon't catch them they give meaningful HTTP responses. - added support for alternative encoding error handling and the newHTTPUnicodeError which (if not caught) behaves like a BadRequest. - added BadRequest.wrap. - added ETag support to the SharedDataMiddleware and added an optionto disable caching. - fixed is_xhr on the request objects. - fixed error handling of the url adapter's dispatch method. (#318) - fixed bug with SharedDataMiddleware. - fixed Accept.values. - EnvironHeaders contain content-type and content-length now - url_encode treats lists and tuples in dicts passed to it as multiplevalues for the same key so that one doesn't have to pass a MultiDictto the function. - added validate_arguments - added BaseRequest.application - improved Python 2.3 support - run_simple accepts use_debugger and use_evalex parameters now,like the make_runserver factory function from the script module. - the environ_property is now read-only by default - it's now possible to initialize requests as “shallow” requests whichcauses runtime errors if the request object tries to consume theinput stream. ### Version 0.2 Released Feb 14th 2008, codename Faustkeil. - Added AnyConverter to the routing system. - Added werkzeug.contrib.securecookie - Exceptions have a get_response() method that return a response object - fixed the path ordering bug (#293), thanks Thomas Johansson - BaseReporterStream is now part of the werkzeug contrib module. FromWerkzeug 0.3 onwards you will have to import it from there. - added DispatcherMiddleware. - RequestRedirect is now a subclass of HTTPException and uses a301 status code instead of 302. - url_encode and url_decode can optionally treat keys as unicode stringsnow, too. - werkzeug.script has a different caller format for boolean arguments now. - renamed lazy_property to cached_property. - added import_string. - added is_* properties to request objects. - added empty() method to routing rules. - added werkzeug.contrib.profiler. - added extends to Headers. - added dump_cookie and parse_cookie. - added as_tuple to the Client. - added werkzeug.contrib.testtools. - added werkzeug.unescape - added BaseResponse.freeze - added werkzeug.contrib.atom - the HTTPExceptions accept an argument description now which overrides thedefault description. - the MapAdapter has a default for path info now. If you usebind_to_environ you don't have to pass the path later. - the wsgiref subclass werkzeug uses for the dev server does not use directsys.stderr logging any more but a logger called “werkzeug”. - implemented Href. - implemented find_modules - refactored request and response objects into base objects, mixins andfull featured subclasses that implement all mixins. - added simple user agent parser - werkzeug's routing raises MethodNotAllowed now if it matches arule but for a different method. - many fixes and small improvements ### Version 0.1 Released on Dec 9th 2007, codename Wictorinoxger. - Initial release ### API Changes 0.9 - Soft-deprecated the BaseRequest.data andBaseResponse.data attributes and introduced new methodsto interact with entity data. 
This will allows in the future tomake better APIs to deal with request and response entitybodies. So far there is no deprecation warning but users arestrongly encouraged to update. - The Headers and EnvironHeaders datastructuresare now designed to operate on unicode data. This is a backwardsincomaptible change and was necessary for the Python 3 support. - The Headers object no longer supports in-place operationsthrough the old linked method. This has been removed withoutreplacement due to changes on the encoding model. 0.6.2 - renamed the attribute implicit_seqence_conversion attribute ofthe request object to implicit_sequence_conversion. Becausethis is a feature that is typically unused and was only in therefor the 0.6 series we consider this a bug that does not requirebackwards compatibility support which would be impossible toproperly implement. 0.6 - Old deprecations were removed. - cached_property.writeable was deprecated. - BaseResponse.get_wsgi_headers() replaces the olderBaseResponse.fix_headers method. The older method staysaround for backwards compatibility reasons until 0.7. - BaseResponse.header_list was deprecated. You should notneed this function, get_wsgi_headers and the to_listmethod on the regular headers should serve as a replacement. - Deprecated BaseResponse.iter_encoded‘s charset parameter. - LimitedStream non-silent usage was deprecated. - the __repr__ of HTTP exceptions changed. This might breakdoctests. 0.5 - Werkzeug switched away from wsgiref as library for the builtinwebserver. - The encoding parameter for Templates is now calledcharset. The older one will work for another two versionsbut warn with a [DeprecationWarning](http://docs.python.org/dev/library/exceptions.html#DeprecationWarning "(在 Python v3.5)") [http://docs.python.org/dev/library/exceptions.html#DeprecationWarning]. - The Client has cookie support now which is enabledby default. - BaseResponse._get_file_stream() is now passed more parametersto make the function more useful. In 0.6 the old way to invokethe method will no longer work. To support both newer and olderWerkzeug versions you can add all arguments to the signature andprovide default values for each of them. - url_decode() no longer supports both & and ; asseparator. This has to be specified explicitly now. - The request object is now enforced to be read-only for allattributes. If your code relies on modifications of some valuesmakes sure to create copies of them using the mutable counterparts! - Some data structures that were only used on request objects arenow immutable as well. (Authorization / Acceptand subclasses) - CacheControl was splitted up into RequestCacheControland ResponseCacheControl, the former being immutable.The old class will go away in 0.6 - undocumented werkzeug.test.File was replaced byFileWrapper. - it's not longer possible to pass dicts inside the data dictin Client. Use tuples instead. - It's save to modify the return value of MultiDict.getlist()and methods that return lists in the MultiDict now. Theclass creates copies instead of revealing the internal lists.However MultiDict.setlistdefault still (and intentionally)returns the internal list for modifications. 0.3 - Werkzeug 0.3 will be the last release with Python 2.3 compatibility. - The environ_property is now read-only by default. This decision wasmade because the request in general should be considered read-only. 0.2 - The BaseReporterStream is now part of the contrib module, thenew module is werkzeug.contrib.reporterstream. 
Starting with0.3, the old import will not work any longer. - RequestRedirect now uses a 301 status code. Previously a 302status code was used incorrectly. If you want to continue usingthis 302 code, use response=redirect(e.new_url,302). - lazy_property is now called cached_property. The alias forthe old name will disappear in Werkzeug 0.3. - match can now raise MethodNotAllowed if configured formethods and there was no method for that request. - The response_body attribute on the response object is now calleddata. With Werkzeug 0.3 the old name will not work any longer. - The file-like methods on the response object are deprecated. Ifyou want to use the response object as file like object use theResponse class or a subclass of BaseResponse and mix the newResponseStreamMixin class and use response.stream.

# Lint Validation Middleware

New in version 0.5.

This module provides a middleware that performs sanity checks on the WSGI application. It checks that [**PEP 333**](http://www.python.org/dev/peps/pep-0333) is properly implemented and warns on some common HTTP errors such as non-empty responses for 304 status codes.

This module provides a middleware, the LintMiddleware. Wrap your application with it and it will warn about common problems with WSGI and HTTP while your application is running. It's strongly recommended to use it during development.

*class* werkzeug.contrib.lint.LintMiddleware(app)

This middleware wraps an application and warns on common errors. Among other things it currently checks for the following problems:

- invalid status codes
- non-bytestrings sent to the WSGI server
- strings returned from the WSGI application
- non-empty conditional responses
- unquoted etags
- relative URLs in the Location header
- unsafe calls to wsgi.input
- unclosed iterators

Detected errors are emitted using the standard Python [warnings](http://docs.python.org/dev/library/warnings.html#module-warnings) system and usually end up on stderr.

~~~
from werkzeug.contrib.lint import LintMiddleware
app = LintMiddleware(app)
~~~

Parameters:

- **app** – the application to wrap
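
Because problems are reported through the standard warnings module, you can make them fail loudly while running your test suite. A minimal sketch, assuming `app` is already your WSGI application; note that the filter below escalates every warning, not only the linter's:

~~~
import warnings
from werkzeug.contrib.lint import LintMiddleware

app = LintMiddleware(app)

# Turn warnings into exceptions so WSGI/HTTP violations abort the request
# instead of merely printing to stderr.
warnings.simplefilter('error')
~~~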

# WSGI Application Profiler

This module provides a simple WSGI profiler middleware for finding bottlenecks in web applications. It uses the [profile](http://docs.python.org/dev/library/profile.html#module-profile) or [cProfile](http://docs.python.org/dev/library/profile.html#module-cProfile) module to do the profiling and writes the stats to the stream provided (defaults to stderr).

Example usage:

~~~
from werkzeug.contrib.profiler import ProfilerMiddleware
app = ProfilerMiddleware(app)
~~~

*class* werkzeug.contrib.profiler.MergeStream(*streams)

An object that redirects write calls to multiple streams. Use this to log to both sys.stdout and a file:

~~~
import sys

f = open('profiler.log', 'w')
stream = MergeStream(sys.stdout, f)
profiler = ProfilerMiddleware(app, stream)
~~~

*class* werkzeug.contrib.profiler.ProfilerMiddleware(app, stream=None, sort_by=('time', 'calls'), restrictions=(), profile_dir=None)

Simple profiler middleware. Wraps a WSGI application and profiles a request. This intentionally buffers the response so that timings are more exact.

By giving the profile_dir argument, pstats.Stats files are saved to that directory, one file per request. Without it, a summary is printed to stream instead.

For the exact meaning of sort_by and restrictions consult the [profile](http://docs.python.org/dev/library/profile.html#module-profile) documentation.

New in version 0.9: Added support for restrictions and profile_dir.

Parameters:

- **app** – the WSGI application to profile.
- **stream** – the stream for the profiled stats. Defaults to stderr.
- **sort_by** – a tuple of columns to sort the result by.
- **restrictions** – a tuple of profiling restrictions, not used if dumping to profile_dir.
- **profile_dir** – directory name to save pstat files.

werkzeug.contrib.profiler.make_action(app_factory, hostname='localhost', port=5000, threaded=False, processes=1, stream=None, sort_by=('time', 'calls'), restrictions=())

Return a new callback for werkzeug.script that starts a local server with the profiler enabled.

~~~
from werkzeug.contrib import profiler
action_profile = profiler.make_action(make_app)
~~~
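
For per-request dumps that can be inspected later with the pstats module, a sketch along these lines should work; the 'profiles' directory name is arbitrary and `app` is assumed to be your WSGI application:

~~~
import os
from werkzeug.contrib.profiler import ProfilerMiddleware
from werkzeug.serving import run_simple

# Write one stats file per request into ./profiles instead of printing
# a summary to stderr.
if not os.path.isdir('profiles'):
    os.mkdir('profiles')
app = ProfilerMiddleware(app, profile_dir='profiles')

run_simple('localhost', 5000, app)
~~~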

# Fixers 0.5 新版功能. This module includes various helpers that fix bugs in web servers. They maybe necessary for some versions of a buggy web server but not others. We tryto stay updated with the status of the bugs as good as possible but you haveto make sure whether they fix the problem you encounter. If you notice bugs in webservers not fixed in this module considercontributing a patch. *class *werkzeug.contrib.fixers.CGIRootFix(*app*, *app_root='/'*) Wrap the application in this middleware if you are using FastCGI or CGIand you have problems with your app root being set to the cgi script's pathinstead of the path users are going to visit 在 0.9 版更改: Added app_root parameter and renamed from LighttpdCGIRootFix. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>app</strong> – the WSGI application</li><li><strong>app_root</strong> – Defaulting to <tt class="docutils literal"><span class="pre">'/'</span></tt>, you can set this to something elseif your app is mounted somewhere else.</li></ul></td></tr></tbody></table> *class *werkzeug.contrib.fixers.PathInfoFromRequestUriFix(*app*) On windows environment variables are limited to the system charsetwhich makes it impossible to store the PATH_INFO variable in theenvironment without loss of information on some systems. This is for example a problem for CGI scripts on a Windows Apache. This fixer works by recreating the PATH_INFO from REQUEST_URI,REQUEST_URL, or UNENCODED_URL (whatever is available). Thus thefix can only be applied if the webserver supports either of thesevariables. | 参数: | **app** – the WSGI application | |-----|-----| *class *werkzeug.contrib.fixers.ProxyFix(*app*, *num_proxies=1*) This middleware can be applied to add HTTP proxy support to anapplication that was not designed with HTTP proxies in mind. Itsets REMOTE_ADDR, HTTP_HOST from X-Forwarded headers. If you have more than one proxy server in front of your app, setnum_proxies accordingly. Do not use this middleware in non-proxy setups for security reasons. The original values of REMOTE_ADDR and HTTP_HOST are stored inthe WSGI environment as werkzeug.proxy_fix.orig_remote_addr andwerkzeug.proxy_fix.orig_http_host. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>app</strong> – the WSGI application</li><li><strong>num_proxies</strong> – the number of proxy servers in front of the app.</li></ul></td></tr></tbody></table> get_remote_addr(*forwarded_for*) Selects the new remote addr from the given list of ips inX-Forwarded-For. By default it picks the one that the num_proxiesproxy server provides. Before 0.9 it would always pick the first. 0.8 新版功能. *class *werkzeug.contrib.fixers.HeaderRewriterFix(*app*, *remove_headers=None*, *add_headers=None*) This middleware can remove response headers and add others. 
Thisis for example useful to remove the Date header from responses if youare using a server that adds that header, no matter if it's present ornot or to add X-Powered-By headers: ~~~ app = HeaderRewriterFix(app, remove_headers=['Date'], add_headers=[('X-Powered-By', 'WSGI')]) ~~~ <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>app</strong> – the WSGI application</li><li><strong>remove_headers</strong> – a sequence of header keys that should beremoved.</li><li><strong>add_headers</strong> – a sequence of <tt class="docutils literal"><span class="pre">(key,</span> <span class="pre">value)</span></tt> tuples that shouldbe added.</li></ul></td></tr></tbody></table> *class *werkzeug.contrib.fixers.InternetExplorerFix(*app*, *fix_vary=True*, *fix_attach=True*) This middleware fixes a couple of bugs with Microsoft InternetExplorer. Currently the following fixes are applied: - removing of Vary headers for unsupported mimetypes whichcauses troubles with caching. Can be disabled by passingfix_vary=False to the constructor.see: [http://support.microsoft.com/kb/824847/en-us](http://support.microsoft.com/kb/824847/en-us) - removes offending headers to work around caching bugs inInternet Explorer if Content-Disposition is set. Can bedisabled by passing fix_attach=False to the constructor. If it does not detect affected Internet Explorer versions it won't touchthe request / response.
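As a rough sketch of how these fixers are typically stacked, the following wraps an application that runs behind a single reverse proxy; make_app is an assumed application factory:

~~~
from werkzeug.contrib.fixers import ProxyFix, HeaderRewriterFix
from yourapplication import make_app   # assumed application factory

app = make_app()
# trust exactly one proxy hop for REMOTE_ADDR / HTTP_HOST
app = ProxyFix(app, num_proxies=1)
# drop the Date header the frontend adds anyway and advertise the stack
app = HeaderRewriterFix(app, remove_headers=['Date'],
                        add_headers=[('X-Powered-By', 'WSGI')])
~~~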

Iter IO

Last updated: 2022-04-01 04:08:06

# Iter IO This module implements a [IterIO](# "werkzeug.contrib.iterio.IterIO") that converts an iterator intoa stream object and the other way round. Converting streams intoiterators requires the [greenlet](http://codespeak.net/py/dist/greenlet.html) module. To convert an iterator into a stream all you have to do is to pass itdirectly to the [IterIO](# "werkzeug.contrib.iterio.IterIO") constructor. In this example we pass ita newly created generator: ~~~ def foo(): yield "something\n" yield "otherthings" stream = IterIO(foo()) print stream.read() # read the whole iterator ~~~ The other way round works a bit different because we have to ensure thatthe code execution doesn't take place yet. An [IterIO](# "werkzeug.contrib.iterio.IterIO") call with acallable as first argument does two things. The function itself is passedan [IterIO](# "werkzeug.contrib.iterio.IterIO") stream it can feed. The object returned by the[IterIO](# "werkzeug.contrib.iterio.IterIO") constructor on the other hand is not an stream object butan iterator: ~~~ def foo(stream): stream.write("some") stream.write("thing") stream.flush() stream.write("otherthing") iterator = IterIO(foo) print iterator.next() # prints something print iterator.next() # prints otherthing iterator.next() # raises StopIteration ~~~ *class *werkzeug.contrib.iterio.IterIO Instances of this object implement an interface compatible with thestandard Python file object. Streams are either read-only orwrite-only depending on how the object is created. If the first argument is an iterable a file like object is returned thatreturns the contents of the iterable. In case the iterable is emptyread operations will return the sentinel value. If the first argument is a callable then the stream object will becreated and passed to that function. The caller itself however willnot receive a stream but an iterable. The function will be be executedstep by step as something iterates over the returned iterable. Eachcall to flush() will create an item for the iterable. Ifflush() is called without any writes in-between the sentinelvalue will be yielded. Note for Python 3: due to the incompatible interface of bytes andstreams you should set the sentinel value explicitly to an emptybytestring (b'') if you are expecting to deal with bytes asotherwise the end of the stream is marked with the wrong sentinelvalue. 0.9 新版功能: sentinel parameter was added.
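As a small additional sketch of the iterator-to-stream direction, assuming the read() method accepts an optional size argument as on regular file objects, partial reads pull data from the iterator on demand:

~~~
from werkzeug.contrib.iterio import IterIO

def produce():
    yield "Hello "
    yield "World\n"

stream = IterIO(produce())
header = stream.read(5)    # "Hello": a partial read consumes the iterator lazily
rest = stream.read()       # " World\n": the remainder of the data
~~~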

Extra Wrappers

Last updated: 2022-04-01 04:08:04

# Extra Wrappers Extra wrappers or mixins contributed by the community. These wrappers canbe mixed in into request objects to add extra functionality. Example: ~~~ from werkzeug.wrappers import Request as RequestBase from werkzeug.contrib.wrappers import JSONRequestMixin class Request(RequestBase, JSONRequestMixin): pass ~~~ Afterwards this request object provides the extra functionality of the[JSONRequestMixin](# "werkzeug.contrib.wrappers.JSONRequestMixin"). *class *werkzeug.contrib.wrappers.JSONRequestMixin Add json method to a request object. This will parse the input datathrough simplejson if possible. [BadRequest](# "werkzeug.exceptions.BadRequest") will be raised if the content-typeis not json or if the data itself cannot be parsed as json. json Get the result of simplejson.loads if possible. *class *werkzeug.contrib.wrappers.ProtobufRequestMixin Add protobuf parsing method to a request object. This will parse theinput data through [protobuf](http://code.google.com/p/protobuf/) [http://code.google.com/p/protobuf/] if possible. [BadRequest](# "werkzeug.exceptions.BadRequest") will be raised if the content-typeis not protobuf or if the data itself cannot be parsed property. parse_protobuf(*proto_type*) Parse the data into an instance of proto_type. protobuf_check_initialization* = True* by default the [ProtobufRequestMixin](# "werkzeug.contrib.wrappers.ProtobufRequestMixin") will raise a[BadRequest](# "werkzeug.exceptions.BadRequest") if the object is notinitialized. You can bypass that check by setting thisattribute to False. *class *werkzeug.contrib.wrappers.RoutingArgsRequestMixin This request mixin adds support for the wsgiorg routing args[specification](http://www.wsgi.org/wsgi/Specifications/routing_args) [http://www.wsgi.org/wsgi/Specifications/routing_args]. routing_args The positional URL arguments as tuple. routing_vars The keyword URL arguments as dict. *class *werkzeug.contrib.wrappers.ReverseSlashBehaviorRequestMixin This mixin reverses the trailing slash behavior of [script_root](# "werkzeug.contrib.wrappers.ReverseSlashBehaviorRequestMixin.script_root")and [path](# "werkzeug.contrib.wrappers.ReverseSlashBehaviorRequestMixin.path"). This makes it possible to use urljoin()directly on the paths. Because it changes the behavior or Request this class has to bemixed in *before* the actual request class: ~~~ class MyRequest(ReverseSlashBehaviorRequestMixin, Request): pass ~~~ This example shows the differences (for an application mounted on/application and the request going to /application/foo/bar): > |   | normal behavior | reverse behavior | |-----|-----|-----| | script_root | /application | /application/ | | path | /foo/bar | foo/bar | path Requested path as unicode. This works a bit like the regular pathinfo in the WSGI environment but will not include a leading slash. script_root The root path of the script includling a trailing slash. *class *werkzeug.contrib.wrappers.DynamicCharsetRequestMixin “If this mixin is mixed into a request class it will providea dynamic charset attribute. This means that if the charset istransmitted in the content type headers it's used from there. Because it changes the behavior or Request this class hasto be mixed in *before* the actual request class: ~~~ class MyRequest(DynamicCharsetRequestMixin, Request): pass ~~~ By default the request object assumes that the URL charset is thesame as the data charset. If the charset varies on each requestbased on the transmitted data it's not a good idea to let the URLschange based on that. 
Most browsers assume either utf-8 or latin1for the URLs if they have troubles figuring out. It's stronglyrecommended to set the URL charset to utf-8: ~~~ class MyRequest(DynamicCharsetRequestMixin, Request): url_charset = 'utf-8' ~~~ 0.6 新版功能. charset The charset from the content type. default_charset* = 'latin1'* the default charset that is assumed if the content type headeris missing or does not contain a charset parameter. The defaultis latin1 which is what HTTP specifies as default charset.You may however want to set this to utf-8 to better supportbrowsers that do not transmit a charset for incoming data. unknown_charset(*charset*) Called if a charset was provided but is not supported bythe Python codecs module. By default latin1 is assumed thento not lose any information, you may override this method tochange the behavior. | 参数: | **charset** – the charset that was not found. | |-----|-----| | 返回: | the replacement charset. | *class *werkzeug.contrib.wrappers.DynamicCharsetResponseMixin If this mixin is mixed into a response class it will providea dynamic charset attribute. This means that if the charset islooked up and stored in the Content-Type header and updatesitself automatically. This also means a small performance hit butcan be useful if you're working with different charsets onresponses. Because the charset attribute is no a property at class-level, thedefault value is stored in default_charset. Because it changes the behavior or Response this class hasto be mixed in *before* the actual response class: ~~~ class MyResponse(DynamicCharsetResponseMixin, Response): pass ~~~ 0.6 新版功能. charset The charset for the response. It's stored inside theContent-Type header as a parameter. default_charset* = 'utf-8'* the default charset.
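A minimal sketch of combining several of these mixins on one request class; behaviour-changing mixins must come *before* the actual Request class, and the url_charset value follows the recommendation above. The view shown is a hypothetical example:

~~~
from werkzeug.wrappers import Request as RequestBase, Response
from werkzeug.contrib.wrappers import (JSONRequestMixin,
                                       DynamicCharsetRequestMixin)

class Request(DynamicCharsetRequestMixin, JSONRequestMixin, RequestBase):
    url_charset = 'utf-8'   # keep URLs stable even if the data charset varies

@Request.application
def application(request):
    payload = request.json                  # raises BadRequest on non-JSON input
    return Response('received %d keys' % len(payload))
~~~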

Cache

Last updated: 2022-04-01 04:08:02

# Cache The main problem with dynamic Web sites is, well, they're dynamic. Eachtime a user requests a page, the webserver executes a lot of code, queriesthe database, renders templates until the visitor gets the page he sees. This is a lot more expensive than just loading a file from the file systemand sending it to the visitor. For most Web applications, this overhead isn't a big deal but once itbecomes, you will be glad to have a cache system in place. ### How Caching Works Caching is pretty simple. Basically you have a cache object lurking aroundsomewhere that is connected to a remote cache or the file system orsomething else. When the request comes in you check if the current pageis already in the cache and if so, you're returning it from the cache.Otherwise you generate the page and put it into the cache. (Or a fragmentof the page, you don't have to cache the full thing) Here is a simple example of how to cache a sidebar for a template: ~~~ def get_sidebar(user): identifier = 'sidebar_for/user%d' % user.id value = cache.get(identifier) if value is not None: return value value = generate_sidebar_for(user=user) cache.set(identifier, value, timeout=60 * 5) return value ~~~ ### Creating a Cache Object To create a cache object you just import the cache system of your choicefrom the cache module and instantiate it. Then you can start workingwith that object: ~~~ >>> from werkzeug.contrib.cache import SimpleCache >>> c = SimpleCache() >>> c.set("foo", "value") >>> c.get("foo") 'value' >>> c.get("missing") is None True ~~~ Please keep in mind that you have to create the cache and put it somewhereyou have access to it (either as a module global you can import or you justput it into your WSGI application). ### Cache System API *class *werkzeug.contrib.cache.BaseCache(*default_timeout=300*) Baseclass for the cache systems. All the cache systems implement thisAPI or a superset of it. | 参数: | **default_timeout** – the default timeout that is used if no timeout isspecified on [set()](# "werkzeug.contrib.cache.BaseCache.set"). | |-----|-----| add(*key*, *value*, *timeout=None*) Works like [set()](# "werkzeug.contrib.cache.BaseCache.set") but does not overwrite the values of alreadyexisting keys. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>key</strong> – the key to set</li><li><strong>value</strong> – the value for the key</li><li><strong>timeout</strong> – the cache timeout for the key or the defaulttimeout if not specified.</li></ul></td></tr></tbody></table> clear() Clears the cache. Keep in mind that not all caches supportcompletely clearing the cache. dec(*key*, *delta=1*) Decrements the value of a key by delta. If the key doesnot yet exist it is initialized with -delta. For supporting caches this is an atomic operation. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>key</strong> – the key to increment.</li><li><strong>delta</strong> – the delta to subtract.</li></ul></td></tr></tbody></table> delete(*key*) Deletes key from the cache. If it does not exist in the cachenothing happens. | 参数: | **key** – the key to delete. 
| |-----|-----| delete_many(**keys*) Deletes multiple keys at once. | 参数: | **keys** – The function accepts multiple keys as positionalarguments. | |-----|-----| get(*key*) Looks up key in the cache and returns the value for it.If the key does not exist None is returned instead. | 参数: | **key** – the key to be looked up. | |-----|-----| get_dict(**keys*) Works like [get_many()](# "werkzeug.contrib.cache.BaseCache.get_many") but returns a dict: ~~~ d = cache.get_dict("foo", "bar") foo = d["foo"] bar = d["bar"] ~~~ | 参数: | **keys** – The function accepts multiple keys as positionalarguments. | |-----|-----| get_many(**keys*) Returns a list of values for the given keys.For each key a item in the list is created. Example: ~~~ foo, bar = cache.get_many("foo", "bar") ~~~ If a key can't be looked up None is returned for that keyinstead. | 参数: | **keys** – The function accepts multiple keys as positionalarguments. | |-----|-----| inc(*key*, *delta=1*) Increments the value of a key by delta. If the key doesnot yet exist it is initialized with delta. For supporting caches this is an atomic operation. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>key</strong> – the key to increment.</li><li><strong>delta</strong> – the delta to add.</li></ul></td></tr></tbody></table> set(*key*, *value*, *timeout=None*) Adds a new key/value to the cache (overwrites value, if key alreadyexists in the cache). <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>key</strong> – the key to set</li><li><strong>value</strong> – the value for the key</li><li><strong>timeout</strong> – the cache timeout for the key (if not specified,it uses the default timeout).</li></ul></td></tr></tbody></table> set_many(*mapping*, *timeout=None*) Sets multiple keys and values from a mapping. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>mapping</strong> – a mapping with the keys/values to set.</li><li><strong>timeout</strong> – the cache timeout for the key (if not specified,it uses the default timeout).</li></ul></td></tr></tbody></table> ### Cache Systems *class *werkzeug.contrib.cache.NullCache(*default_timeout=300*) A cache that doesn't cache. This can be useful for unit testing. | 参数: | **default_timeout** – a dummy parameter that is ignored but existsfor API compatibility with other caches. | |-----|-----| *class *werkzeug.contrib.cache.SimpleCache(*threshold=500*, *default_timeout=300*) Simple memory cache for single process environments. This class existsmainly for the development server and is not 100% thread safe. It triesto use as many atomic operations as possible and no locks for simplicitybut it could happen under heavy load that keys are added multiple times. 
<table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>threshold</strong> – the maximum number of items the cache stores beforeit starts deleting some.</li><li><strong>default_timeout</strong> – the default timeout that is used if no timeout isspecified on <a class="reference internal" href="#werkzeug.contrib.cache.BaseCache.set" title="werkzeug.contrib.cache.BaseCache.set"><tt class="xref py py-meth docutils literal"><span class="pre">set()</span></tt></a>.</li></ul></td></tr></tbody></table> *class *werkzeug.contrib.cache.MemcachedCache(*servers=None*, *default_timeout=300*, *key_prefix=None*) A cache that uses memcached as backend. The first argument can either be an object that resembles the API of amemcache.Client or a tuple/list of server addresses. In theevent that a tuple/list is passed, Werkzeug tries to import the bestavailable memcache library. Implementation notes: This cache backend works around some limitations inmemcached to simplify the interface. For example unicode keys are encodedto utf-8 on the fly. Methods such as [get_dict()](# "werkzeug.contrib.cache.BaseCache.get_dict") returnthe keys in the same format as passed. Furthermore all get methodssilently ignore key errors to not cause problems when untrusted user datais passed to the get methods which is often the case in web applications. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>servers</strong> – a list or tuple of server addresses or alternativelya <tt class="xref py py-class docutils literal"><span class="pre">memcache.Client</span></tt> or a compatible client.</li><li><strong>default_timeout</strong> – the default timeout that is used if no timeout isspecified on <a class="reference internal" href="#werkzeug.contrib.cache.BaseCache.set" title="werkzeug.contrib.cache.BaseCache.set"><tt class="xref py py-meth docutils literal"><span class="pre">set()</span></tt></a>.</li><li><strong>key_prefix</strong> – a prefix that is added before all keys. This makes itpossible to use the same memcached server for differentapplications. Keep in mind that<a class="reference internal" href="#werkzeug.contrib.cache.BaseCache.clear" title="werkzeug.contrib.cache.BaseCache.clear"><tt class="xref py py-meth docutils literal"><span class="pre">clear()</span></tt></a> will also clear keys with adifferent prefix.</li></ul></td></tr></tbody></table> *class *werkzeug.contrib.cache.GAEMemcachedCache This class is deprecated in favour of [MemcachedCache](# "werkzeug.contrib.cache.MemcachedCache") whichnow supports Google Appengine as well. 在 0.8 版更改: Deprecated in favour of [MemcachedCache](# "werkzeug.contrib.cache.MemcachedCache"). *class *werkzeug.contrib.cache.RedisCache(*host='localhost'*, *port=6379*, *password=None*, *db=0*, *default_timeout=300*, *key_prefix=None*) Uses the Redis key-value store as a cache backend. The first argument can be either a string denoting address of the Redisserver or an object resembling an instance of a redis.Redis class. Note: Python Redis API already takes care of encoding unicode strings onthe fly. 0.7 新版功能. 0.8 新版功能: key_prefix was added. 
在 0.8 版更改: This cache backend now properly serializes objects. 在 0.8.3 版更改: This cache backend now supports password authentication. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>host</strong> – address of the Redis server or an object which API iscompatible with the official Python Redis client (redis-py).</li><li><strong>port</strong> – port number on which Redis server listens for connections.</li><li><strong>password</strong> – password authentication for the Redis server.</li><li><strong>db</strong> – db (zero-based numeric index) on Redis Server to connect.</li><li><strong>default_timeout</strong> – the default timeout that is used if no timeout isspecified on <a class="reference internal" href="#werkzeug.contrib.cache.BaseCache.set" title="werkzeug.contrib.cache.BaseCache.set"><tt class="xref py py-meth docutils literal"><span class="pre">set()</span></tt></a>.</li><li><strong>key_prefix</strong> – A prefix that should be added to all keys.</li></ul></td></tr></tbody></table> *class *werkzeug.contrib.cache.FileSystemCache(*cache_dir*, *threshold=500*, *default_timeout=300*, *mode=384*) A cache that stores the items on the file system. This cache dependson being the only user of the cache_dir. Make absolutely sure thatnobody but this cache stores files there or otherwise the cache willrandomly delete files therein. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>cache_dir</strong> – the directory where cache files are stored.</li><li><strong>threshold</strong> – the maximum number of items the cache stores beforeit starts deleting some.</li><li><strong>default_timeout</strong> – the default timeout that is used if no timeout isspecified on <a class="reference internal" href="#werkzeug.contrib.cache.BaseCache.set" title="werkzeug.contrib.cache.BaseCache.set"><tt class="xref py py-meth docutils literal"><span class="pre">set()</span></tt></a>.</li><li><strong>mode</strong> – the file mode wanted for the cache files, default 0600</li></ul></td></tr></tbody></table>
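Building on the sidebar example from "How Caching Works", the following is a minimal sketch of a generic caching decorator on top of the BaseCache API; the cache directory, key template and generate_sidebar_for() call are assumptions for this example:

~~~
from functools import wraps
from werkzeug.contrib.cache import FileSystemCache

cache = FileSystemCache('/tmp/myapp-cache', default_timeout=300)

def cached(key_template, timeout=None):
    """Cache the return value of a function of positional arguments."""
    def decorator(f):
        @wraps(f)
        def wrapper(*args):
            key = key_template % args
            rv = cache.get(key)
            if rv is None:
                rv = f(*args)
                cache.set(key, rv, timeout=timeout)
            return rv
        return wrapper
    return decorator

@cached('sidebar_for/user%d', timeout=60 * 5)
def get_sidebar(user_id):
    return generate_sidebar_for(user_id)   # assumed expensive call
~~~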

Secure Cookie

Last updated: 2022-04-01 04:07:59

# Secure Cookie This module implements a cookie that is not alterable from the clientbecause it adds a checksum the server checks for. You can use it assession replacement if all you have is a user id or something to marka logged in user. Keep in mind that the data is still readable from the client as anormal cookie is. However you don't have to store and flush thesessions you have at the server. Example usage: ~~~ >>> from werkzeug.contrib.securecookie import SecureCookie >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef") ~~~ Dumping into a string so that one can store it in a cookie: ~~~ >>> value = x.serialize() ~~~ Loading from that string again: ~~~ >>> x = SecureCookie.unserialize(value, "deadbeef") >>> x["baz"] (1, 2, 3) ~~~ If someone modifies the cookie and the checksum is wrong the unserializemethod will fail silently and return a new empty SecureCookie object. Keep in mind that the values will be visible in the cookie so do notstore data in a cookie you don't want the user to see. ### Application Integration If you are using the werkzeug request objects you could integrate thesecure cookie into your application like this: ~~~ from werkzeug.utils import cached_property from werkzeug.wrappers import BaseRequest from werkzeug.contrib.securecookie import SecureCookie # don't use this key but a different one; you could just use # os.urandom(20) to get something random SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea' class Request(BaseRequest): @cached_property def client_session(self): data = self.cookies.get('session_data') if not data: return SecureCookie(secret_key=SECRET_KEY) return SecureCookie.unserialize(data, SECRET_KEY) def application(environ, start_response): request = Request(environ, start_response) # get a response object here response = ... if request.client_session.should_save: session_data = request.client_session.serialize() response.set_cookie('session_data', session_data, httponly=True) return response(environ, start_response) ~~~ A less verbose integration can be achieved by using shorthand methods: ~~~ class Request(BaseRequest): @cached_property def client_session(self): return SecureCookie.load_cookie(self, secret_key=COOKIE_SECRET) def application(environ, start_response): request = Request(environ, start_response) # get a response object here response = ... request.client_session.save_cookie(response) return response(environ, start_response) ~~~ ### Security The default implementation uses Pickle as this is the only module thatused to be available in the standard library when this module was created.If you have simplejson available it's strongly recommended to create asubclass and replace the serialization method: ~~~ import json from werkzeug.contrib.securecookie import SecureCookie class JSONSecureCookie(SecureCookie): serialization_method = json ~~~ The weakness of Pickle is that if someone gains access to the secret keythe attacker can not only modify the session but also execute arbitrarycode on the server. ### Reference *class *werkzeug.contrib.securecookie.SecureCookie(*data=None*, *secret_key=None*, *new=True*) Represents a secure cookie. You can subclass this class and providean alternative mac method. The import thing is that the mac methodis a function with a similar interface to the hashlib. Requiredmethods are update() and digest(). 
Example usage: ~~~ >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef") >>> x["foo"] 42 >>> x["baz"] (1, 2, 3) >>> x["blafasel"] = 23 >>> x.should_save True ~~~ <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>data</strong> – the initial data. Either a dict, list of tuples or <cite>None</cite>.</li><li><strong>secret_key</strong> – the secret key. If not set <cite>None</cite> or not specifiedit has to be set before <a class="reference internal" href="#werkzeug.contrib.securecookie.SecureCookie.serialize" title="werkzeug.contrib.securecookie.SecureCookie.serialize"><tt class="xref py py-meth docutils literal"><span class="pre">serialize()</span></tt></a> is called.</li><li><strong>new</strong> – The initial value of the <cite>new</cite> flag.</li></ul></td></tr></tbody></table> new True if the cookie was newly created, otherwise False modified Whenever an item on the cookie is set, this attribute is set to True.However this does not track modifications inside mutable objectsin the cookie: ~~~ >>> c = SecureCookie() >>> c["foo"] = [1, 2, 3] >>> c.modified True >>> c.modified = False >>> c["foo"].append(4) >>> c.modified False ~~~ In that situation it has to be set to modified by hand so that[should_save](# "werkzeug.contrib.securecookie.SecureCookie.should_save") can pick it up. hash_method() The hash method to use. This has to be a module with a new functionor a function that creates a hashlib object. Such as hashlib.md5Subclasses can override this attribute. The default hash is sha1.Make sure to wrap this in staticmethod() if you store an arbitraryfunction there such as hashlib.sha1 which might be implementedas a function. *classmethod *load_cookie(*request*, *key='session'*, *secret_key=None*) Loads a [SecureCookie](# "werkzeug.contrib.securecookie.SecureCookie") from a cookie in request. If thecookie is not set, a new [SecureCookie](# "werkzeug.contrib.securecookie.SecureCookie") instanced isreturned. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>request</strong> – a request object that has a <cite>cookies</cite> attributewhich is a dict of all cookie values.</li><li><strong>key</strong> – the name of the cookie.</li><li><strong>secret_key</strong> – the secret key used to unquote the cookie.Always provide the value even though it hasno default!</li></ul></td></tr></tbody></table> *classmethod *quote(*value*) Quote the value for the cookie. This can be any object supportedby [serialization_method](# "werkzeug.contrib.securecookie.SecureCookie.serialization_method"). | 参数: | **value** – the value to quote. | |-----|-----| quote_base64* = True* if the contents should be base64 quoted. This can be disabled if theserialization process returns cookie safe strings only. save_cookie(*response*, *key='session'*, *expires=None*, *session_expires=None*, *max_age=None*, *path='/'*, *domain=None*, *secure=None*, *httponly=False*, *force=False*) Saves the SecureCookie in a cookie on response object. Allparameters that are not described here are forwarded directlyto set_cookie(). 
<table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>response</strong> – a response object that has a<tt class="xref py py-meth docutils literal"><span class="pre">set_cookie()</span></tt> method.</li><li><strong>key</strong> – the name of the cookie.</li><li><strong>session_expires</strong> – the expiration date of the secure cookiestored information. If this is not providedthe cookie <cite>expires</cite> date is used instead.</li></ul></td></tr></tbody></table> serialization_method* = <module 'pickle' from '/usr/lib/python2.7/pickle.pyc'>* the module used for serialization. Unless overriden by subclassesthe standard pickle module is used. serialize(*expires=None*) Serialize the secure cookie into a string. If expires is provided, the session will be automatically invalidatedafter expiration when you unseralize it. This provides betterprotection against session cookie theft. | 参数: | **expires** – an optional expiration date for the cookie (a[datetime.datetime](http://docs.python.org/dev/library/datetime.html#datetime.datetime "(在 Python v3.5)") [http://docs.python.org/dev/library/datetime.html#datetime.datetime] object) | |-----|-----| should_save True if the session should be saved. By default this is only truefor [modified](# "werkzeug.contrib.securecookie.SecureCookie.modified") cookies, not [new](# "werkzeug.contrib.securecookie.SecureCookie.new"). *classmethod *unquote(*value*) Unquote the value for the cookie. If unquoting does not work a[UnquoteError](# "werkzeug.contrib.securecookie.UnquoteError") is raised. | 参数: | **value** – the value to unquote. | |-----|-----| *classmethod *unserialize(*string*, *secret_key*) Load the secure cookie from a serialized string. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first simple"><li><strong>string</strong> – the cookie value to unserialize.</li><li><strong>secret_key</strong> – the secret key used to serialize the cookie.</li></ul></td></tr><tr class="field-even field"><th class="field-name">返回:</th><td class="field-body"><p class="first last">a new <a class="reference internal" href="#werkzeug.contrib.securecookie.SecureCookie" title="werkzeug.contrib.securecookie.SecureCookie"><tt class="xref py py-class docutils literal"><span class="pre">SecureCookie</span></tt></a>.</p></td></tr></tbody></table> *exception *werkzeug.contrib.securecookie.UnquoteError Internal exception used to signal failures on quoting.
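Tying the Security section together, here is a minimal sketch of a hardened subclass that serializes with JSON and uses a SHA-256 MAC instead of the default SHA-1; the secret key shown is a placeholder:

~~~
import hashlib
import json

from werkzeug.contrib.securecookie import SecureCookie

class JSONSecureCookie(SecureCookie):
    serialization_method = json
    # as noted above, hash_method must be wrapped in staticmethod()
    hash_method = staticmethod(hashlib.sha256)

SECRET_KEY = 'replace-me-with-os.urandom(20)'   # placeholder
cookie = JSONSecureCookie({'user_id': 42}, SECRET_KEY)
value = cookie.serialize()
restored = JSONSecureCookie.unserialize(value, SECRET_KEY)
~~~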

Sessions

Last updated: 2022-04-01 04:07:57

# Sessions This module contains some helper classes that help one to add sessionsupport to a python WSGI application. For full client-side sessionstorage see [securecookie](# "werkzeug.contrib.securecookie") which implements asecure, client-side session storage. ### Application Integration ~~~ from werkzeug.contrib.sessions import SessionMiddleware, \ FilesystemSessionStore app = SessionMiddleware(app, FilesystemSessionStore()) ~~~ The current session will then appear in the WSGI environment aswerkzeug.session. However it's recommended to not use the middlewarebut the stores directly in the application. However for very simplescripts a middleware for sessions could be sufficient. This module does not implement methods or ways to check if a session isexpired. That should be done by a cronjob and storage specific. Forexample to prune unused filesystem sessions one could check the modifiedtime of the files. It sessions are stored in the database the new()method should add an expiration timestamp for the session. For better flexibility it's recommended to not use the middleware but thestore and session object directly in the application dispatching: ~~~ session_store = FilesystemSessionStore() def application(environ, start_response): request = Request(environ) sid = request.cookies.get('cookie_name') if sid is None: request.session = session_store.new() else: request.session = session_store.get(sid) response = get_the_response_object(request) if request.session.should_save: session_store.save(request.session) response.set_cookie('cookie_name', request.session.sid) return response(environ, start_response) ~~~ ### Reference *class *werkzeug.contrib.sessions.Session(*data*, *sid*, *new=False*) Subclass of a dict that keeps track of direct object changes. Changesin mutable structures are not tracked, for those you have to setmodified to True by hand. sid The session ID as string. new True is the cookie was newly created, otherwise False modified Whenever an item on the cookie is set, this attribute is set to True.However this does not track modifications inside mutable objectsin the session: ~~~ >>> c = Session({}, sid='deadbeefbabe2c00ffee') >>> c["foo"] = [1, 2, 3] >>> c.modified True >>> c.modified = False >>> c["foo"].append(4) >>> c.modified False ~~~ In that situation it has to be set to modified by hand so that[should_save](# "werkzeug.contrib.sessions.Session.should_save") can pick it up. should_save True if the session should be saved. 在 0.6 版更改: By default the session is now only saved if the session ismodified, not if it is new like it was before. *class *werkzeug.contrib.sessions.SessionStore(*session_class=None*) Baseclass for all session stores. The Werkzeug contrib module does notimplement any useful stores besides the filesystem store, applicationdevelopers are encouraged to create their own stores. | 参数: | **session_class** – The session class to use. Defaults to[Session](# "werkzeug.contrib.sessions.Session"). | |-----|-----| delete(*session*) Delete a session. generate_key(*salt=None*) Simple function that generates a new session key. get(*sid*) Get a session for this sid or a new session object. This methodhas to check if the session key is valid and create a new session ifthat wasn't the case. is_valid_key(*key*) Check if a key has the correct format. new() Generate a new session. save(*session*) Save a session. save_if_modified(*session*) Save if a session class wants an update. 
*class *werkzeug.contrib.sessions.FilesystemSessionStore(*path=None*, *filename_template='werkzeug_%s.sess'*, *session_class=None*, *renew_missing=False*, *mode=420*) Simple example session store that saves sessions on the filesystem.This store works best on POSIX systems and Windows Vista / WindowsServer 2008 and newer. 在 0.6 版更改: renew_missing was added. Previously this was considered True,now the default changed to False and it can be explicitlydeactivated. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>path</strong> – the path to the folder used for storing the sessions.If not provided the default temporary directory is used.</li><li><strong>filename_template</strong> – a string template used to give the sessiona filename. <tt class="docutils literal"><span class="pre">%s</span></tt> is replaced with thesession id.</li><li><strong>session_class</strong> – The session class to use. Defaults to<a class="reference internal" href="#werkzeug.contrib.sessions.Session" title="werkzeug.contrib.sessions.Session"><tt class="xref py py-class docutils literal"><span class="pre">Session</span></tt></a>.</li><li><strong>renew_missing</strong> – set to <cite>True</cite> if you want the store togive the user a new sid if the session wasnot yet saved.</li></ul></td></tr></tbody></table> list() Lists all sessions in the store. 0.6 新版功能. *class *werkzeug.contrib.sessions.SessionMiddleware(*app*, *store*, *cookie_name='session_id'*, *cookie_age=None*, *cookie_expires=None*, *cookie_path='/'*, *cookie_domain=None*, *cookie_secure=None*, *cookie_httponly=False*, *environ_key='werkzeug.session'*) A simple middleware that puts the session object of a store providedinto the WSGI environ. It automatically sets cookies and restoressessions. However a middleware is not the preferred solution because it won't be asfast as sessions managed by the application itself and will put a key intothe WSGI environment only relevant for the application which is againstthe concept of WSGI. The cookie parameters are the same as for the dump_cookie()function just prefixed with cookie_. Additionally max_age iscalled cookie_age and not cookie_max_age because of backwardscompatibility.
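The module leaves session expiry to you; as suggested above, a cronjob can prune stale filesystem sessions by modification time. A minimal sketch, assuming the store uses the default path and the default 'werkzeug_%s.sess' filename template; the age threshold is an arbitrary choice:

~~~
import os
import time

SESSION_PATH = '/tmp'        # wherever your FilesystemSessionStore writes
MAX_AGE = 24 * 60 * 60       # prune sessions untouched for a day

now = time.time()
for name in os.listdir(SESSION_PATH):
    # only touch files matching the default filename template
    if not (name.startswith('werkzeug_') and name.endswith('.sess')):
        continue
    filename = os.path.join(SESSION_PATH, name)
    if now - os.path.getmtime(filename) > MAX_AGE:
        os.remove(filename)
~~~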

Atom Syndication

Last updated: 2022-04-01 04:07:55

# Atom Syndication This module provides a class called [AtomFeed](# "werkzeug.contrib.atom.AtomFeed") which can beused to generate feeds in the Atom syndication format (see [**RFC 4287**](http://tools.ietf.org/html/rfc4287.html) [http://tools.ietf.org/html/rfc4287.html]). Example: ~~~ def atom_feed(request): feed = AtomFeed("My Blog", feed_url=request.url, url=request.host_url, subtitle="My example blog for a feed test.") for post in Post.query.limit(10).all(): feed.add(post.title, post.body, content_type='html', author=post.author, url=post.url, id=post.uid, updated=post.last_update, published=post.pub_date) return feed.get_response() ~~~ *class *werkzeug.contrib.atom.AtomFeed(*title=None*, *entries=None*, ***kwargs*) A helper class that creates Atom feeds. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>title</strong> – the title of the feed. Required.</li><li><strong>title_type</strong> – the type attribute for the title element. One of<tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>.</li><li><strong>url</strong> – the url for the feed (not the url <em>of</em> the feed)</li><li><strong>id</strong> – a globally unique id for the feed. Must be an URI. Ifnot present the <cite>feed_url</cite> is used, but one of both isrequired.</li><li><strong>updated</strong> – the time the feed was modified the last time. Mustbe a <a class="reference external" href="http://docs.python.org/dev/library/datetime.html#datetime.datetime" title="(在 Python v3.5)"><tt class="xref py py-class docutils literal"><span class="pre">datetime.datetime</span></tt></a><span class="link-target"> [http://docs.python.org/dev/library/datetime.html#datetime.datetime]</span> object. If notpresent the latest entry's <cite>updated</cite> is used.</li><li><strong>feed_url</strong> – the URL to the feed. Should be the URL that wasrequested.</li><li><strong>author</strong> – the author of the feed. Must be either a string (thename) or a dict with name (required) and uri oremail (both optional). Can be a list of (may bemixed, too) strings and dicts, too, if there aremultiple authors. Required if not every entry has anauthor element.</li><li><strong>icon</strong> – an icon for the feed.</li><li><strong>logo</strong> – a logo for the feed.</li><li><strong>rights</strong> – copyright information for the feed.</li><li><strong>rights_type</strong> – the type attribute for the rights element. One of<tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>. Default is<tt class="docutils literal"><span class="pre">'text'</span></tt>.</li><li><strong>subtitle</strong> – a short description of the feed.</li><li><strong>subtitle_type</strong> – the type attribute for the subtitle element.One of <tt class="docutils literal"><span class="pre">'text'</span></tt>, <tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt>or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>. 
Default is <tt class="docutils literal"><span class="pre">'text'</span></tt>.</li><li><strong>links</strong> – additional links. Must be a list of dictionaries withhref (required) and rel, type, hreflang, title, length(all optional)</li><li><strong>generator</strong> – the software that generated this feed. This must bea tuple in the form <tt class="docutils literal"><span class="pre">(name,</span> <span class="pre">url,</span> <span class="pre">version)</span></tt>. Ifyou don't want to specify one of them, set the itemto <cite>None</cite>.</li><li><strong>entries</strong> – a list with the entries for the feed. Entries can alsobe added later with <a class="reference internal" href="#werkzeug.contrib.atom.AtomFeed.add" title="werkzeug.contrib.atom.AtomFeed.add"><tt class="xref py py-meth docutils literal"><span class="pre">add()</span></tt></a>.</li></ul></td></tr></tbody></table> For more information on the elements see[http://www.atomenabled.org/developers/syndication/](http://www.atomenabled.org/developers/syndication/) Everywhere where a list is demanded, any iterable can be used. add(**args*, ***kwargs*) Add a new entry to the feed. This function can either be calledwith a [FeedEntry](# "werkzeug.contrib.atom.FeedEntry") or some keyword and positional argumentsthat are forwarded to the [FeedEntry](# "werkzeug.contrib.atom.FeedEntry") constructor. generate() Return a generator that yields pieces of XML. get_response() Return a response object for the feed. to_string() Convert the feed into a string. *class *werkzeug.contrib.atom.FeedEntry(*title=None*, *content=None*, *feed_url=None*, ***kwargs*) Represents a single entry in a feed. <table class="docutils field-list" frame="void" rules="none"><col class="field-name"/><col class="field-body"/><tbody valign="top"><tr class="field-odd field"><th class="field-name">参数:</th><td class="field-body"><ul class="first last simple"><li><strong>title</strong> – the title of the entry. Required.</li><li><strong>title_type</strong> – the type attribute for the title element. One of<tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>.</li><li><strong>content</strong> – the content of the entry.</li><li><strong>content_type</strong> – the type attribute for the content element. Oneof <tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>.</li><li><strong>summary</strong> – a summary of the entry's content.</li><li><strong>summary_type</strong> – the type attribute for the summary element. Oneof <tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>.</li><li><strong>url</strong> – the url for the entry.</li><li><strong>id</strong> – a globally unique id for the entry. Must be an URI. Ifnot present the URL is used, but one of both is required.</li><li><strong>updated</strong> – the time the entry was modified the last time. 
Mustbe a <a class="reference external" href="http://docs.python.org/dev/library/datetime.html#datetime.datetime" title="(在 Python v3.5)"><tt class="xref py py-class docutils literal"><span class="pre">datetime.datetime</span></tt></a><span class="link-target"> [http://docs.python.org/dev/library/datetime.html#datetime.datetime]</span> object. Required.</li><li><strong>author</strong> – the author of the entry. Must be either a string (thename) or a dict with name (required) and uri oremail (both optional). Can be a list of (may bemixed, too) strings and dicts, too, if there aremultiple authors. Required if the feed does not have anauthor element.</li><li><strong>published</strong> – the time the entry was initially published. Mustbe a <a class="reference external" href="http://docs.python.org/dev/library/datetime.html#datetime.datetime" title="(在 Python v3.5)"><tt class="xref py py-class docutils literal"><span class="pre">datetime.datetime</span></tt></a><span class="link-target"> [http://docs.python.org/dev/library/datetime.html#datetime.datetime]</span> object.</li><li><strong>rights</strong> – copyright information for the entry.</li><li><strong>rights_type</strong> – the type attribute for the rights element. One of<tt class="docutils literal"><span class="pre">'html'</span></tt>, <tt class="docutils literal"><span class="pre">'text'</span></tt> or <tt class="docutils literal"><span class="pre">'xhtml'</span></tt>. Default is<tt class="docutils literal"><span class="pre">'text'</span></tt>.</li><li><strong>links</strong> – additional links. Must be a list of dictionaries withhref (required) and rel, type, hreflang, title, length(all optional)</li><li><strong>categories</strong> – categories for the entry. Must be a list of dictionarieswith term (required), scheme and label (all optional)</li><li><strong>xml_base</strong> – The xml base (url) for this feed item. If not providedit will default to the item url.</li></ul></td></tr></tbody></table> For more information on the elements see[http://www.atomenabled.org/developers/syndication/](http://www.atomenabled.org/developers/syndication/) Everywhere where a list is demanded, any iterable can be used.
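add() also accepts a pre-built [FeedEntry](# "werkzeug.contrib.atom.FeedEntry"), which can be handy when entries are constructed elsewhere. A minimal sketch with placeholder titles, URLs and dates:

~~~
from datetime import datetime
from werkzeug.contrib.atom import AtomFeed, FeedEntry

feed = AtomFeed('My Blog', feed_url='http://example.com/feed.atom',
                url='http://example.com/')
entry = FeedEntry('Hello Atom', 'First post body', content_type='text',
                  url='http://example.com/posts/1',
                  author='Jane Doe',
                  updated=datetime(2015, 1, 1))
feed.add(entry)
xml = feed.to_string()       # or feed.get_response() inside a view function
~~~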

Contributed Modules

Last updated: 2022-04-01 04:07:52

# Contributed Modules

A lot of useful code contributed by the community is shipped with Werkzeug as part of the contrib module:

- [Atom Syndication](#)
- [Sessions](#)
  - [Application Integration](#)
  - [Reference](#)
- [Secure Cookie](#)
  - [Application Integration](#)
  - [Security](#)
  - [Reference](#)
- [Cache](#)
  - [How Caching Works](#)
  - [Creating a Cache Object](#)
  - [Cache System API](#)
  - [Cache Systems](#)
- [Extra Wrappers](#)
- [Iter IO](#)
- [Fixers](#)
- [WSGI Application Profiler](#)
- [Lint Validation Middleware](#)

HTTP Proxying

Last updated: 2022-04-01 04:07:50

# HTTP Proxying

Many people prefer using a standalone Python HTTP server and proxying that server via nginx, Apache etc. A very stable Python server is CherryPy. This part of the documentation shows you how to combine your WSGI application with the CherryPy WSGI server and how to configure the webserver for proxying.

### Creating a .py server

To run your application you need a start-server.py file that starts up the WSGI server. It looks something along these lines:

~~~
from cherrypy import wsgiserver
from yourapplication import make_app

server = wsgiserver.CherryPyWSGIServer(('localhost', 8080), make_app())
try:
    server.start()
except KeyboardInterrupt:
    server.stop()
~~~

If you now start the file, the server will listen on localhost:8080. Keep in mind that WSGI applications behave slightly differently in proxied setups. If you have not developed your application with proxying in mind, you can apply the [ProxyFix](# "werkzeug.contrib.fixers.ProxyFix") middleware, as shown in the sketch below.

### Configuring nginx

As an example we show here how to configure nginx to proxy to the server.

The basic nginx configuration looks like this:

~~~
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
    proxy_redirect default;
}
~~~

Since nginx doesn't start your server for you, you have to do it yourself. You can either write an init.d script for that or execute it inside a screen session:

~~~
$ screen
$ python start-server.py
~~~
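If the application was not written with a reverse proxy in mind, ProxyFix can be applied directly in start-server.py. A minimal sketch, again assuming a make_app() factory:

~~~
from cherrypy import wsgiserver
from werkzeug.contrib.fixers import ProxyFix
from yourapplication import make_app

# one nginx instance sits in front of the app, so trust one proxy hop
app = ProxyFix(make_app(), num_proxies=1)

server = wsgiserver.CherryPyWSGIServer(('localhost', 8080), app)
try:
    server.start()
except KeyboardInterrupt:
    server.stop()
~~~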

FastCGI

Last updated: 2022-04-01 04:07:48

# FastCGI A very popular deployment setup on servers like [lighttpd](http://www.lighttpd.net/) [http://www.lighttpd.net/] and [nginx](http://nginx.net/) [http://nginx.net/]is FastCGI. To use your WSGI application with any of them you will needa FastCGI server first. The most popular one is [flup](http://trac.saddi.com/flup) [http://trac.saddi.com/flup] which we will use for this guide. Makesure to have it installed. ### Creating a .fcgi file First you need to create the FastCGI server file. Let's call ityourapplication.fcgi: ~~~ #!/usr/bin/python from flup.server.fcgi import WSGIServer from yourapplication import make_app if __name__ == '__main__': application = make_app() WSGIServer(application).run() ~~~ This is enough for Apache to work, however ngingx and older versions oflighttpd need a socket to be explicitly passed to communicate with the FastCGIserver. For that to work you need to pass the path to the socket to theWSGIServer: ~~~ WSGIServer(application, bindAddress='/path/to/fcgi.sock').run() ~~~ The path has to be the exact same path you define in the serverconfig. Save the yourapplication.fcgi file somewhere you will find it again.It makes sense to have that in /var/www/yourapplication or somethingsimilar. Make sure to set the executable bit on that file so that the serverscan execute it: ~~~ # chmod +x /var/www/yourapplication/yourapplication.fcgi ~~~ ### Configuring lighttpd A basic FastCGI configuration for lighttpd looks like this: ~~~ fastcgi.server = ("/yourapplication.fcgi" => (( "socket" => "/tmp/yourapplication-fcgi.sock", "bin-path" => "/var/www/yourapplication/yourapplication.fcgi", "check-local" => "disable", "max-procs" -> 1 )) ) alias.url = ( "/static/" => "/path/to/your/static" ) url.rewrite-once = ( "^(/static.*)$" => "$1", "^(/.*)$" => "/yourapplication.fcgi$1" ~~~ Remember to enable the FastCGI, alias and rewrite modules. This configurationbinds the application to /yourapplication. If you want the application towork in the URL root you have to work around a lighttpd bug with theLighttpdCGIRootFix middleware. Make sure to apply it only if you are mounting the application the URLroot. Also, see the Lighty docs for more information on [FastCGI and Python](http://redmine.lighttpd.net/wiki/lighttpd/Docs:ModFastCGI) [http://redmine.lighttpd.net/wiki/lighttpd/Docs:ModFastCGI] (note thatexplicitly passing a socket to run() is no longer necessary). ### Configuring nginx Installing FastCGI applications on nginx is a bit tricky because by defaultsome FastCGI parameters are not properly forwarded. A basic FastCGI configuration for nginx looks like this: ~~~ location /yourapplication/ { include fastcgi_params; if ($uri ~ ^/yourapplication/(.*)?) { set $path_url $1; } fastcgi_param PATH_INFO $path_url; fastcgi_param SCRIPT_NAME /yourapplication; fastcgi_pass unix:/tmp/yourapplication-fcgi.sock; } ~~~ This configuration binds the application to /yourapplication. If you wantto have it in the URL root it's a bit easier because you don't have to figureout how to calculate PATH_INFO and SCRIPT_NAME: ~~~ location /yourapplication/ { include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_NAME ""; fastcgi_pass unix:/tmp/yourapplication-fcgi.sock; } ~~~ Since Nginx doesn't load FastCGI apps, you have to do it by yourself. Youcan either write an init.d script for that or execute it inside a screensession: ~~~ $ screen $ /var/www/yourapplication/yourapplication.fcgi ~~~ ### Debugging FastCGI deployments tend to be hard to debug on most webservers. 
Very often the only thing the server log tells you is something along the lines of "premature end of headers". In order to debug the application, the only thing that can really give you an idea of why it breaks is switching to the correct user and executing the application by hand.

This example assumes your application is called application.fcgi and that your webserver user is www-data:

~~~
$ su www-data
$ cd /var/www/yourapplication
$ python application.fcgi
Traceback (most recent call last):
  File "yourapplication.fcgi", line 4, in <module>
ImportError: No module named yourapplication
~~~

In this case the error seems to be "yourapplication" not being on the Python path. Common problems are:

- relative paths being used; don't rely on the current working directory
- the code depending on environment variables that are not set by the web server
- different Python interpreters being used
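Another option while hunting such errors is to temporarily wrap the application in Werkzeug's debugger middleware so tracebacks show up in the browser instead of disappearing into the FastCGI plumbing. A rough sketch of the .fcgi file with this applied; never leave it enabled in production:

~~~
#!/usr/bin/python
from flup.server.fcgi import WSGIServer
from werkzeug.debug import DebuggedApplication
from yourapplication import make_app

if __name__ == '__main__':
    # evalex=False: render tracebacks only, no interactive console
    application = DebuggedApplication(make_app(), evalex=False)
    WSGIServer(application).run()
~~~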

mod_wsgi (Apache)

Last updated: 2022-04-01 04:07:46

# mod_wsgi (Apache) If you are using the [Apache](http://httpd.apache.org/) [http://httpd.apache.org/] webserver you should consider using [mod_wsgi](http://code.google.com/p/modwsgi/) [http://code.google.com/p/modwsgi/]. ### Installing mod_wsgi If you don't have mod_wsgi installed yet you have to either install it usinga package manager or compile it yourself. The mod_wsgi [installation instructions](http://code.google.com/p/modwsgi/wiki/QuickInstallationGuide) [http://code.google.com/p/modwsgi/wiki/QuickInstallationGuide] cover installation instructions forsource installations on UNIX systems. If you are using ubuntu / debian you can apt-get it and activate it as follows: ~~~ # apt-get install libapache2-mod-wsgi ~~~ On FreeBSD install mod_wsgi by compiling the www/mod_wsgi port or by usingpkg_add: ~~~ # pkg_add -r mod_wsgi ~~~ If you are using pkgsrc you can install mod_wsgi by compiling thewww/ap2-wsgi package. If you encounter segfaulting child processes after the first apache reload youcan safely ignore them. Just restart the server. ### Creating a .wsgi file To run your application you need a yourapplication.wsgi file. This filecontains the code mod_wsgi is executing on startup to get the applicationobject. The object called application in that file is then used asapplication. For most applications the following file should be sufficient: ~~~ from yourapplication import make_app application = make_app() ~~~ If you don't have a factory function for application creation but a singletoninstance you can directly import that one as application. Store that file somewhere where you will find it again (eg:/var/www/yourapplication) and make sure that yourapplication and allthe libraries that are in use are on the python load path. If you don'twant to install it system wide consider using a [virtual python](http://pypi.python.org/pypi/virtualenv) [http://pypi.python.org/pypi/virtualenv] instance. ### Configuring Apache The last thing you have to do is to create an Apache configuration file foryour application. In this example we are telling mod_wsgi to execute theapplication under a different user for security reasons: ~~~ <VirtualHost *> ServerName example.com WSGIDaemonProcess yourapplication user=user1 group=group1 processes=2 threads=5 WSGIScriptAlias / /var/www/yourapplication/yourapplication.wsgi <Directory /var/www/yourapplication> WSGIProcessGroup yourapplication WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> </VirtualHost> ~~~ For more information consult the [mod_wsgi wiki](http://code.google.com/p/modwsgi/wiki/) [http://code.google.com/p/modwsgi/wiki/].
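If the application package is not installed system-wide, the .wsgi file can extend sys.path before importing it. A minimal sketch with placeholder paths:

~~~
import sys

# make the application package importable; adjust the path to your layout
sys.path.insert(0, '/var/www/yourapplication')

from yourapplication import make_app
application = make_app()
~~~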

CGI

Last updated: 2022-04-01 04:07:43

# CGI

If all other deployment methods do not work, CGI will work for sure. CGI is supported by all major servers but usually has less-than-optimal performance.

This is also the way you can use a Werkzeug application on Google's [AppEngine](http://code.google.com/appengine/); there, however, the execution does happen in a CGI-like environment and the application's performance is unaffected because of that.

### Creating a .cgi file

First you need to create the CGI application file. Let's call it yourapplication.cgi:

~~~
#!/usr/bin/python
from wsgiref.handlers import CGIHandler
from yourapplication import make_app

application = make_app()
CGIHandler().run(application)
~~~

If you're running Python 2.4 you will need the [wsgiref](http://docs.python.org/dev/library/wsgiref.html#module-wsgiref "(in Python v3.5)") package. Python 2.5 and higher ship it as part of the standard library.

### Server Setup

Usually there are two ways to configure the server. Either copy the .cgi file into a cgi-bin directory (and use mod_rewrite or something similar to rewrite the URL) or let the server point to the file directly.

In Apache, for example, you can put a line like this into the config:

~~~
ScriptAlias /app /path/to/the/application.cgi
~~~

For more information consult the documentation of your webserver.
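Because CGI deployments frequently end up with the app root pointing at the .cgi script itself, the CGIRootFix middleware from the Fixers chapter can be applied inside the .cgi file. A minimal sketch, assuming the ScriptAlias /app mount shown above:

~~~
#!/usr/bin/python
from wsgiref.handlers import CGIHandler
from werkzeug.contrib.fixers import CGIRootFix
from yourapplication import make_app

# tell the application it is mounted at /app rather than at the script path
application = CGIRootFix(make_app(), app_root='/app')
CGIHandler().run(application)
~~~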

Application Deployment

Last updated: 2022-04-01 04:07:41

# Application Deployment

This section covers running your application in production on a web server such as Apache or lighttpd.

- [CGI](#)
  - [Creating a .cgi file](#)
  - [Server Setup](#)
- [mod_wsgi (Apache)](#)
  - [Installing mod_wsgi](#)
  - [Creating a .wsgi file](#)
  - [Configuring Apache](#)
- [FastCGI](#)
  - [Creating a .fcgi file](#)
  - [Configuring lighttpd](#)
  - [Configuring nginx](#)
  - [Debugging](#)
- [HTTP Proxying](#)
  - [Creating a .py server](#)
  - [Configuring nginx](#)