HTTP Utilities

Werkzeug provides a number of functions to parse and generate HTTP headers that are useful when implementing WSGI middlewares or whenever you are operating at a lower level. All of this functionality is also exposed from request and response objects.

Datetime Functions

These functions simplify working with times in an HTTP context. Werkzeug produces timezone-aware datetime objects in UTC. When passing datetime objects to Werkzeug, it assumes any naive datetime is in UTC.

When comparing datetime values from Werkzeug, your own datetime objects must also be timezone-aware, or you must make the values from Werkzeug naive.

  • dt = datetime.now(timezone.utc) gets the current time in UTC.

  • dt = datetime(..., tzinfo=timezone.utc) creates a time in UTC.

  • dt = dt.replace(tzinfo=timezone.utc) makes a naive object aware by assuming it’s in UTC.

  • dt = dt.replace(tzinfo=None) makes an aware object naive.
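
A minimal sketch of the comparison described above, using parse_date() (documented below); both values are timezone-aware, so the comparison is safe:

from datetime import datetime, timezone

from werkzeug.http import parse_date

# parse_date() always returns a timezone-aware datetime in UTC (or None).
last_modified = parse_date("Sat, 01 Jan 2022 00:00:00 GMT")

# Use an aware "now" as well; comparing aware and naive datetimes raises TypeError.
now = datetime.now(timezone.utc)

if last_modified is not None and last_modified < now:
    ...  # the resource is older than the current time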

werkzeug.http.parse_date(value)

Parse an RFC 2822 date into a timezone-aware datetime.datetime object, or None if parsing fails.

This is a wrapper for email.utils.parsedate_to_datetime(). It returns None if parsing fails instead of raising an exception, and always returns a timezone-aware datetime object. If the string doesn’t have timezone information, it is assumed to be UTC.

Parameters:

value (str | None) – A string with a supported date format.

Return type:

datetime | None

Changelog

Changed in version 2.0: Return a timezone-aware datetime object. Use email.utils.parsedate_to_datetime.
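
A short usage sketch:

>>> from werkzeug.http import parse_date
>>> parse_date("Sat, 01 Jan 2022 00:00:00 GMT")
datetime.datetime(2022, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)
>>> parse_date("not a date") is None
True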

werkzeug.http.http_date(timestamp=None)

Format a datetime object or timestamp into an RFC 2822 date string.

This is a wrapper for email.utils.format_datetime(). It assumes naive datetime objects are in UTC instead of raising an exception.

Parameters:

timestamp (datetime | date | int | float | struct_time | None) – The datetime or timestamp to format. Defaults to the current time.

Return type:

str

Changelog

Changed in version 2.0: Use email.utils.format_datetime. Accept date objects.
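
A short usage sketch:

>>> from datetime import datetime, timezone
>>> from werkzeug.http import http_date
>>> http_date(datetime(2022, 1, 1, tzinfo=timezone.utc))
'Sat, 01 Jan 2022 00:00:00 GMT'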

Header Parsing

The following functions can be used to parse incoming HTTP headers. Because Python does not provide data structures with the semantics required by RFC 2616, Werkzeug implements some custom data structures that are documented separately.

werkzeug.http.parse_options_header(value)

Parse a header that consists of a value with key=value parameters separated by semicolons ;. For example, the Content-Type header.

parse_options_header("text/html; charset=UTF-8")
('text/html', {'charset': 'UTF-8'})

parse_options_header("")
("", {})

This is the reverse of dump_options_header().

This parses valid parameter parts as described in RFC 9110. Invalid parts are skipped.

This handles continuations and charsets as described in RFC 2231, although not as strictly as the RFC. Only ASCII, UTF-8, and ISO-8859-1 charsets are accepted, otherwise the value remains quoted.

Clients may not be consistent in how they handle a quote character within a quoted value. The HTML Standard replaces it with %22 in multipart form data. RFC 9110 uses backslash escapes in HTTP headers. Both are decoded to the " character.

Clients may not be consistent in how they handle non-ASCII characters. HTML documents must declare <meta charset=UTF-8>, otherwise browsers may replace with HTML character references, which can be decoded using html.unescape().

Parameters:

value (str | None) – The header value to parse.

Returns:

(value, options), where options is a dict

Return type:

tuple[str, dict[str, str]]

Changelog

Changed in version 2.3: Invalid parts, such as keys with no value, quoted keys, and incorrectly quoted values, are discarded instead of being treated as None.

Changed in version 2.3: Only ASCII, UTF-8, and ISO-8859-1 are accepted for charset values.

Changed in version 2.3: Escaped quotes in quoted values, like %22 and \", are handled.

Changed in version 2.2: Option names are always converted to lowercase.

Changed in version 2.2: The multiple parameter was removed.

Changed in version 0.15: RFC 2231 parameter continuations are handled.

New in version 0.5.

werkzeug.http.parse_set_header(value, on_update=None)

Parse a set-like header and return a HeaderSet object:

>>> hs = parse_set_header('token, "quoted value"')

The return value is an object that treats the items case-insensitively and keeps the order of the items:

>>> 'TOKEN' in hs
True
>>> hs.index('quoted value')
1
>>> hs
HeaderSet(['token', 'quoted value'])

To create a header from the HeaderSet again, use the dump_header() function.

Parameters:
  • value (str | None) – a set header to be parsed.

  • on_update (Callable[[HeaderSet], None] | None) – an optional callable that is called every time a value on the HeaderSet object is changed.

Returns:

a HeaderSet

Return type:

HeaderSet

werkzeug.http.parse_list_header(value)

Parse a header value that consists of a list of comma separated items according to RFC 9110.

This extends urllib.request.parse_http_list() to remove surrounding quotes from values.

parse_list_header('token, "quoted value"')
['token', 'quoted value']

This is the reverse of dump_header().

Parameters:

value (str) – The header value to parse.

Return type:

list[str]

werkzeug.http.parse_dict_header(value)

Parse a list header using parse_list_header(), then parse each item as a key=value pair.

parse_dict_header('a=b, c="d, e", f')
{"a": "b", "c": "d, e", "f": None}

This is the reverse of dump_header().

If a key does not have a value, it is None.

This handles charsets for values as described in RFC 2231. Only ASCII, UTF-8, and ISO-8859-1 charsets are accepted, otherwise the value remains quoted.

Parameters:

value (str) – The header value to parse.

Return type:

dict[str, str | None]

Changed in version 3.0: Passing bytes is not supported.

Changed in version 3.0: The cls argument is removed.

Changelog

Changed in version 2.3: Added support for key*=charset''value encoded items.

Changed in version 0.9: The cls argument was added.

werkzeug.http.parse_accept_header(value: str | None) → Accept
werkzeug.http.parse_accept_header(value: str | None, cls: type[_TAnyAccept]) → _TAnyAccept

Parse an Accept header according to RFC 9110.

Returns an Accept instance, which can sort and inspect items based on their quality parameter. When parsing Accept-Charset, Accept-Encoding, or Accept-Language, pass the appropriate Accept subclass.

Parameters:
  • value – The header value to parse.

  • cls – The Accept class to wrap the result in.

Returns:

An instance of cls.

Changelog

Changed in version 2.3: Parse according to RFC 9110. Items with invalid q values are skipped.
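
For example, a minimal sketch parsing a browser-style Accept header into a MIMEAccept and picking the best offered type:

>>> from werkzeug.datastructures import MIMEAccept
>>> from werkzeug.http import parse_accept_header
>>> accept = parse_accept_header(
...     "text/html,application/xml;q=0.9,*/*;q=0.8", MIMEAccept
... )
>>> accept.best_match(["application/json", "text/html"])
'text/html'
>>> accept["application/xml"]
0.9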

werkzeug.http.parse_cache_control_header(value: str | None, on_update: Callable[[_TAnyCC], None] | None, cls: None = None) → RequestCacheControl
werkzeug.http.parse_cache_control_header(value: str | None, on_update: Callable[[_TAnyCC], None] | None, cls: type[_TAnyCC]) → _TAnyCC

Parse a cache control header. The RFC differentiates between response and request cache control; this method does not. It’s your responsibility not to use the wrong control statements.

Changelog

New in version 0.5: The cls parameter was added. If not specified, an immutable RequestCacheControl is returned.

Parameters:
  • value – a cache control header to be parsed.

  • on_update – an optional callable that is called every time a value on the CacheControl object is changed.

  • cls – the class for the returned object. By default RequestCacheControl is used.

Returns:

a cls object.
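
A minimal sketch:

>>> from werkzeug.http import parse_cache_control_header
>>> cc = parse_cache_control_header("no-store, max-age=0")
>>> cc.no_store
True
>>> cc.max_age
0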

werkzeug.http.parse_if_range_header(value)

Parses an If-Range header, which can be an etag or a date. Returns an IfRange object.

Changelog

Changed in version 2.0: If the value represents a datetime, it is timezone-aware.

New in version 0.7.

Parameters:

value (str | None) –

Return type:

IfRange
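
For example, the returned IfRange exposes either the etag or the date (a minimal sketch):

>>> from werkzeug.http import parse_if_range_header
>>> parse_if_range_header('"abc"').etag
'abc'
>>> parse_if_range_header("Sat, 01 Jan 2022 00:00:00 GMT").date
datetime.datetime(2022, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)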

werkzeug.http.parse_range_header(value, make_inclusive=True)

Parses a range header into a Range object. If the header is missing or malformed, None is returned. The Range.ranges attribute is a list of (start, stop) tuples, where stop is exclusive.

Changelog

New in version 0.7.

Parameters:
  • value (str | None) –

  • make_inclusive (bool) –

Return type:

Range | None
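
A minimal sketch; note that the stop values are exclusive:

>>> from werkzeug.http import parse_range_header
>>> rv = parse_range_header("bytes=0-499")
>>> rv.units, rv.ranges
('bytes', [(0, 500)])
>>> parse_range_header("garbage") is None
True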

werkzeug.http.parse_content_range_header(value, on_update=None)

Parses a Content-Range header into a ContentRange object, or None if parsing is not possible.

Changelog

New in version 0.7.

Parameters:
  • value (str | None) – a content range header to be parsed.

  • on_update (Callable[[ContentRange], None] | None) – an optional callable that is called every time a value on the ContentRange object is changed.

Return type:

ContentRange | None
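
A minimal sketch; the stop value is exclusive here as well:

>>> from werkzeug.http import parse_content_range_header
>>> cr = parse_content_range_header("bytes 0-499/1234")
>>> cr.units, cr.start, cr.stop, cr.length
('bytes', 0, 500, 1234)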

Header Utilities

The following utilities operate on HTTP headers but do not parse them. They are useful if you’re dealing with conditional responses or if you want to proxy arbitrary requests but need to remove WSGI-unsupported hop-by-hop headers. There is also a function to create HTTP header strings from the parsed data.

werkzeug.http.is_entity_header(header)

Check if a header is an entity header.

Changelog

New in version 0.5.

Parameters:

header (str) – the header to test.

Returns:

True if it’s an entity header, False otherwise.

Return type:

bool

werkzeug.http.is_hop_by_hop_header(header)

Check if a header is an HTTP/1.1 “Hop-by-Hop” header.

Changelog

New in version 0.5.

Parameters:

header (str) – the header to test.

Returns:

True if it’s an HTTP/1.1 “Hop-by-Hop” header, False otherwise.

Return type:

bool

werkzeug.http.remove_entity_headers(headers, allowed=('expires', 'content-location'))

Remove all entity headers from a list or Headers object. This operation works in-place. Expires and Content-Location headers are by default not removed. The reason for this is RFC 2616 section 10.3.5 which specifies some entity headers that should be sent.

Changelog

Changed in version 0.5: added allowed parameter.

Parameters:
  • headers (Headers | list[tuple[str, str]]) – a list or Headers object.

  • allowed (Iterable[str]) – a list of headers that should still be allowed even though they are entity headers.

Return type:

None

werkzeug.http.remove_hop_by_hop_headers(headers)

Remove all HTTP/1.1 “Hop-by-Hop” headers from a list or Headers object. This operation works in-place.

Changelog

New in version 0.5.

Parameters:

headers (Headers | list[tuple[str, str]]) – a list or Headers object.

Return type:

None
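
A minimal sketch of the proxying use case mentioned above, stripping hop-by-hop headers in place before forwarding a response:

from werkzeug.datastructures import Headers
from werkzeug.http import is_hop_by_hop_header, remove_hop_by_hop_headers

headers = Headers([
    ("Connection", "keep-alive"),
    ("Transfer-Encoding", "chunked"),
    ("Content-Type", "text/html"),
])

is_hop_by_hop_header("Connection")   # True
remove_hop_by_hop_headers(headers)   # removes Connection and Transfer-Encoding in place
list(headers)                        # [('Content-Type', 'text/html')]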

werkzeug.http.is_byte_range_valid(start, stop, length)

Checks if a given byte content range is valid for the given length.

Changelog

New in version 0.7.

Parameters:
  • start (int | None) –

  • stop (int | None) –

  • length (int | None) –

Return type:

bool

werkzeug.http.quote_header_value(value, allow_token=True)

Add double quotes around a header value. If the value contains only ASCII token characters, it will be returned unchanged. If the value contains " or \ characters, they will be escaped with an additional \ character.

This is the reverse of unquote_header_value().

Parameters:
  • value (Any) – The value to quote. Will be converted to a string.

  • allow_token (bool) – Set to False to quote the value even if it contains only token characters.

Return type:

str

Changed in version 3.0: Passing bytes is not supported.

Changed in version 3.0: The extra_chars parameter is removed.

Changelog

Changed in version 2.3: The value is quoted if it is the empty string.

New in version 0.5.

werkzeug.http.unquote_header_value(value)

Remove double quotes and decode slash-escaped " and \ characters in a header value.

This is the reverse of quote_header_value().

Parameters:

value (str) – The header value to unquote.

Return type:

str

Changed in version 3.0: The is_filename parameter is removed.
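
For example, quoting and unquoting round-trip a value that contains non-token characters (a minimal sketch):

>>> from werkzeug.http import quote_header_value, unquote_header_value
>>> quote_header_value("token")
'token'
>>> quote_header_value("a b")
'"a b"'
>>> unquote_header_value('"a b"')
'a b'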

werkzeug.http.dump_header(iterable)

Produce a header value from a list of items or key=value pairs, separated by commas ,.

This is the reverse of parse_list_header(), parse_dict_header(), and parse_set_header().

If a value contains non-token characters, it will be quoted.

If a value is None, the key is output alone.

For some keys in some headers, a UTF-8 value can be encoded using a special key*=UTF-8''value form, where value is percent encoded. This function will not produce that format automatically, but if a given key ends with an asterisk *, the value is assumed to have that form and will not be quoted further.

dump_header(["foo", "bar baz"])
'foo, "bar baz"'

dump_header({"foo": "bar baz"})
'foo="bar baz"'
Parameters:

iterable (dict[str, Any] | Iterable[Any]) – The items to create a header from.

Return type:

str

Changed in version 3.0: The allow_token parameter is removed.

Changelog

Changed in version 2.2.3: If a key ends with *, its value will not be quoted.

Cookies

werkzeug.http.parse_cookie(header, cls=None)

Parse a cookie from a string or WSGI environ.

The same key can be provided multiple times, the values are stored in-order. The default MultiDict will have the first value first, and all values can be retrieved with MultiDict.getlist().

Parameters:
  • header (WSGIEnvironment | str | None) – The cookie header as a string, or a WSGI environ dict with a HTTP_COOKIE key.

  • cls (type[ds.MultiDict] | None) – A dict-like class to store the parsed cookies in. Defaults to MultiDict.

Return type:

ds.MultiDict[str, str]

Changed in version 3.0: Passing bytes, and the charset and errors parameters, were removed.

Changelog

Changed in version 1.0: Returns a MultiDict instead of a TypeConversionDict.

Changed in version 0.5: Returns a TypeConversionDict instead of a regular dict. The cls parameter was added.
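
For example, repeated keys keep all values (a minimal sketch):

>>> from werkzeug.http import parse_cookie
>>> cookies = parse_cookie("a=first; a=second; b=other")
>>> cookies["a"]
'first'
>>> cookies.getlist("a")
['first', 'second']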

werkzeug.http.dump_cookie(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, sync_expires=True, max_size=4093, samesite=None)

Create a Set-Cookie header without the Set-Cookie prefix.

The return value is usually restricted to ASCII as the vast majority of values are properly escaped, but that is no guarantee. It’s tunneled through latin1 as required by PEP 3333.

The return value is not ASCII safe if the key contains unicode characters. This is technically against the specification but happens in the wild. It’s strongly recommended to not use non-ASCII values for the keys.

Parameters:
  • max_age (timedelta | int | None) – the number of seconds the cookie should be valid for, or None (default) if the cookie should last only as long as the client’s browser session. A timedelta object is also accepted.

  • expires (str | datetime | int | float | None) – should be a datetime object or unix timestamp.

  • path (str | None) – limits the cookie to a given path; by default it will span the whole domain.

  • domain (str | None) – Use this if you want to set a cross-domain cookie. For example, domain="example.com" will set a cookie that is readable by the domain www.example.com, foo.example.com etc. Otherwise, a cookie will only be readable by the domain that set it.

  • secure (bool) – The cookie will only be available via HTTPS

  • httponly (bool) – disallow JavaScript to access the cookie. This is an extension to the cookie standard and probably not supported by all browsers.

  • sync_expires (bool) – automatically set expires if max_age is defined but expires not.

  • max_size (int) – Warn if the final header value exceeds this size. The default, 4093, should be safely supported by most browsers. Set to 0 to disable this check.

  • samesite (str | None) – Limits the scope of the cookie such that it will only be attached to requests if those requests are same-site.

  • key (str) –

  • value (str) –

Return type:

str

Changed in version 3.0: Passing bytes, and the charset parameter, were removed.

Changelog

Changed in version 2.3.3: The path parameter is / by default.

Changed in version 2.3.1: The value allows more characters without quoting.

Changed in version 2.3: localhost and other names without a dot are allowed for the domain. A leading dot is ignored.

Changed in version 2.3: The path parameter is None by default.

Changed in version 1.0.0: The string 'None' is accepted for samesite.
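
A minimal sketch; the exact attribute order and the synced Expires value depend on the current time and Werkzeug version:

from werkzeug.http import dump_cookie

header = dump_cookie("session", "abc123", max_age=3600, httponly=True, samesite="Lax")
# roughly: 'session=abc123; Expires=...; Max-Age=3600; Path=/; HttpOnly; SameSite=Lax'
# Send it as a Set-Cookie header, e.g. ("Set-Cookie", header).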

Conditional Response Helpers

For conditional responses the following functions might be useful:

werkzeug.http.parse_etags(value)

Parse an etag header.

Parameters:

value (str | None) – the tag header to parse

Returns:

an ETags object.

Return type:

ETags
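
For example, checking strong and weak tags (a minimal sketch):

>>> from werkzeug.http import parse_etags
>>> etags = parse_etags('"foo", W/"bar"')
>>> etags.contains("foo")
True
>>> etags.contains_weak("bar")
True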

werkzeug.http.quote_etag(etag, weak=False)

Quote an etag.

Parameters:
  • etag (str) – the etag to quote.

  • weak (bool) – set to True to tag it “weak”.

Return type:

str

werkzeug.http.unquote_etag(etag)

Unquote a single etag:

>>> unquote_etag('W/"bar"')
('bar', True)
>>> unquote_etag('"bar"')
('bar', False)

Parameters:

etag (str | None) – the etag identifier to unquote.

Returns:

a (etag, weak) tuple.

Return type:

tuple[str, bool] | tuple[None, None]

werkzeug.http.generate_etag(data)

Generate an etag for some data.

Changelog

Changed in version 2.0: Use SHA-1. MD5 may not be available in some environments.

Parameters:

data (bytes) –

Return type:

str

werkzeug.http.is_resource_modified(environ, etag=None, data=None, last_modified=None, ignore_if_range=True)

Convenience method for conditional requests.

Parameters:
  • environ (WSGIEnvironment) – the WSGI environment of the request to be checked.

  • etag (str | None) – the etag for the response for comparison.

  • data (bytes | None) – or alternatively the data of the response to automatically generate an etag using generate_etag().

  • last_modified (datetime | str | None) – an optional date of the last modification.

  • ignore_if_range (bool) – If False, If-Range header will be taken into account.

Returns:

True if the resource was modified, otherwise False.

Return type:

bool

Changelog

Changed in version 2.0: SHA-1 is used to generate an etag value for the data. MD5 may not be available in some environments.

Changed in version 1.0.0: The check is run for methods other than GET and HEAD.
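
A minimal WSGI sketch returning 304 when the client’s cached copy is still current (the etag value here is hypothetical):

from werkzeug.http import is_resource_modified

def application(environ, start_response):
    etag = '"v1"'  # hypothetical identifier for the current resource version

    if not is_resource_modified(environ, etag=etag):
        # The conditional headers match; the client can reuse its cached copy.
        start_response("304 Not Modified", [("ETag", etag)])
        return [b""]

    start_response("200 OK", [("Content-Type", "text/plain"), ("ETag", etag)])
    return [b"Hello World!"]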

Constants

werkzeug.http.HTTP_STATUS_CODES

A dict of status code -> default status message pairs. This is used by the wrappers and other places where an integer status code is expanded to a string throughout Werkzeug.
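
For example:

>>> from werkzeug.http import HTTP_STATUS_CODES
>>> HTTP_STATUS_CODES[200]
'OK'
>>> HTTP_STATUS_CODES[404]
'Not Found'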

Form Data Parsing

Werkzeug provides the form parsing functions separately from the request object so that you can access form data from a plain WSGI environment.

The following formats are currently supported by the form data parser:

  • application/x-www-form-urlencoded

  • multipart/form-data

Nested multipart is not currently supported (Werkzeug 0.9), but it isn’t used by any of the modern web browsers.

Usage example:

>>> from io import BytesIO
>>> from werkzeug.formparser import parse_form_data
>>> data = (
...     b'--foo\r\nContent-Disposition: form-data; name="test"\r\n'
...     b"\r\nHello World!\r\n--foo--"
... )
>>> environ = {
...     "wsgi.input": BytesIO(data),
...     "CONTENT_LENGTH": str(len(data)),
...     "CONTENT_TYPE": "multipart/form-data; boundary=foo",
...     "REQUEST_METHOD": "POST",
... }
>>> stream, form, files = parse_form_data(environ)
>>> stream.read()
b''
>>> form['test']
'Hello World!'
>>> not files
True

Normally the WSGI environment is provided by the WSGI gateway with the incoming data as part of it. If you want to generate such fake WSGI environments for unit testing, you might want to use the create_environ() function or the EnvironBuilder instead.

class werkzeug.formparser.FormDataParser(stream_factory=None, max_form_memory_size=None, max_content_length=None, cls=None, silent=True, *, max_form_parts=None)

This class implements parsing of form data for Werkzeug. By itself it can parse multipart and url encoded form data. It can be subclassed and extended but for most mimetypes it is a better idea to use the untouched stream and expose it as separate attributes on a request object.

Parameters:
  • stream_factory (TStreamFactory | None) – An optional callable that returns a new readable and writable file descriptor. This callable works the same as Response._get_file_stream().

  • max_form_memory_size (int | None) – the maximum number of bytes to be accepted for in-memory stored form data. If the data exceeds the value specified, a RequestEntityTooLarge exception is raised.

  • max_content_length (int | None) – If this is provided and the transmitted data is longer than this value, a RequestEntityTooLarge exception is raised.

  • cls (type[MultiDict] | None) – an optional dict class to use. If this is not specified or None the default MultiDict is used.

  • silent (bool) – If set to False parsing errors will not be caught.

  • max_form_parts (int | None) – The maximum number of multipart parts to be parsed. If this is exceeded, a RequestEntityTooLarge exception is raised.

Changed in version 3.0: The charset and errors parameters were removed.

Changed in version 3.0: The parse_functions attribute and get_parse_func methods were removed.

Changelog

Changed in version 2.2.3: Added the max_form_parts parameter.

New in version 0.8.

werkzeug.formparser.parse_form_data(environ, stream_factory=None, max_form_memory_size=None, max_content_length=None, cls=None, silent=True, *, max_form_parts=None)

Parse the form data in the environ and return it as a tuple in the form (stream, form, files). You should only call this method if the transport method is POST, PUT, or PATCH.

If the mimetype of the transmitted data is multipart/form-data, the files multidict will be filled with FileStorage objects. If the mimetype is unknown, the input stream is wrapped and returned as the first item; otherwise the stream is empty.

This is a shortcut for the common usage of FormDataParser.

Parameters:
  • environ (WSGIEnvironment) – the WSGI environment to be used for parsing.

  • stream_factory (TStreamFactory | None) – An optional callable that returns a new readable and writable file descriptor. This callable works the same as Response._get_file_stream().

  • max_form_memory_size (int | None) – the maximum number of bytes to be accepted for in-memory stored form data. If the data exceeds the value specified, a RequestEntityTooLarge exception is raised.

  • max_content_length (int | None) – If this is provided and the transmitted data is longer than this value, a RequestEntityTooLarge exception is raised.

  • cls (type[MultiDict] | None) – an optional dict class to use. If this is not specified or None the default MultiDict is used.

  • silent (bool) – If set to False parsing errors will not be caught.

  • max_form_parts (int | None) – The maximum number of multipart parts to be parsed. If this is exceeded, a RequestEntityTooLarge exception is raised.

Returns:

A tuple in the form (stream, form, files).

Return type:

t_parse_result

Changed in version 3.0: The charset and errors parameters were removed.

Changelog

Changed in version 2.3: Added the max_form_parts parameter.

New in version 0.5.1: Added the silent parameter.

New in version 0.5: Added the max_form_memory_size, max_content_length, and cls parameters.