switch to high-quality Piper TTS and add label translations
@@ -0,0 +1 @@
pip
@@ -0,0 +1,20 @@
Copyright (c) 2020 Peter Odding

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,291 @@
Metadata-Version: 2.1
Name: coloredlogs
Version: 15.0.1
Summary: Colored terminal output for Python's logging module
Home-page: https://coloredlogs.readthedocs.io
Author: Peter Odding
Author-email: peter@peterodding.com
License: MIT
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Communications
Classifier: Topic :: Scientific/Engineering :: Human Machine Interfaces
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: User Interfaces
Classifier: Topic :: System
Classifier: Topic :: System :: Shells
Classifier: Topic :: System :: System Shells
Classifier: Topic :: System :: Console Fonts
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Terminals
Classifier: Topic :: Utilities
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
Requires-Dist: humanfriendly (>=9.1)
Provides-Extra: cron
Requires-Dist: capturer (>=2.4) ; extra == 'cron'

coloredlogs: Colored terminal output for Python's logging module
================================================================

.. image:: https://travis-ci.org/xolox/python-coloredlogs.svg?branch=master
   :target: https://travis-ci.org/xolox/python-coloredlogs

.. image:: https://coveralls.io/repos/github/xolox/python-coloredlogs/badge.svg?branch=master
   :target: https://coveralls.io/github/xolox/python-coloredlogs?branch=master

The `coloredlogs` package enables colored terminal output for Python's logging_
module. The ColoredFormatter_ class inherits from `logging.Formatter`_ and uses
`ANSI escape sequences`_ to render your logging messages in color. It uses only
standard colors so it should work on any UNIX terminal. It's currently tested
on Python 2.7, 3.5+ and PyPy (2 and 3). On Windows `coloredlogs` automatically
tries to enable native ANSI support (on up-to-date Windows 10 installations)
and falls back on using colorama_ (if installed). Here is a screenshot of the
demo that is printed when the command ``coloredlogs --demo`` is executed:

.. image:: https://coloredlogs.readthedocs.io/en/latest/_images/defaults.png

Note that the screenshot above includes custom logging levels defined by my
verboselogs_ package: if you install both `coloredlogs` and `verboselogs` it
will Just Work (`verboselogs` is of course not required to use
`coloredlogs`).

.. contents::
   :local:

Installation
------------

The `coloredlogs` package is available on PyPI_ which means installation should
be as simple as:

.. code-block:: console

   $ pip install coloredlogs

There's actually a multitude of ways to install Python packages (e.g. the `per
user site-packages directory`_, `virtual environments`_ or just installing
system wide) and I have no intention of getting into that discussion here, so
if this intimidates you then read up on your options before returning to these
instructions 😉.

Optional dependencies
~~~~~~~~~~~~~~~~~~~~~

Native ANSI support on Windows requires an up-to-date Windows 10 installation.
If this is not working for you then consider installing the colorama_ package:

.. code-block:: console

   $ pip install colorama

Once colorama_ is installed it will be used automatically.

Usage
-----

Here's an example of how easy it is to get started:

.. code-block:: python

   import coloredlogs, logging

   # Create a logger object.
   logger = logging.getLogger(__name__)

   # By default the install() function installs a handler on the root logger,
   # this means that log messages from your code and log messages from the
   # libraries that you use will all show up on the terminal.
   coloredlogs.install(level='DEBUG')

   # If you don't want to see log messages from libraries, you can pass a
   # specific logger object to the install() function. In this case only log
   # messages originating from that logger will show up on the terminal.
   coloredlogs.install(level='DEBUG', logger=logger)

   # Some examples.
   logger.debug("this is a debugging message")
   logger.info("this is an informational message")
   logger.warning("this is a warning message")
   logger.error("this is an error message")
   logger.critical("this is a critical message")

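For readers curious what the ``install()`` call replaces, the handler wiring described above can be sketched with nothing but the standard library. ``attach_plain_handler`` is a hypothetical helper invented for this illustration; it uses a plain ``logging.Formatter`` where coloredlogs would use its ``ColoredFormatter`` and omits the color handling entirely:

```python
import logging
import sys

def attach_plain_handler(level='DEBUG', logger=None):
    """Attach a stream handler, roughly mimicking what coloredlogs.install() does."""
    # Default to the root logger, just like install() does.
    target = logger or logging.getLogger()
    handler = logging.StreamHandler(stream=sys.stderr)
    # A colorless stand-in for ColoredFormatter with a similar format string.
    handler.setFormatter(logging.Formatter(
        fmt='%(asctime)s %(name)s[%(process)d] %(levelname)s %(message)s'))
    target.addHandler(handler)
    target.setLevel(level)  # setLevel() accepts level names since Python 3.2
    return handler

logger = logging.getLogger('demo')
handler = attach_plain_handler(level='DEBUG', logger=logger)
logger.debug("this is a debugging message")
```

The real ``install()`` additionally decides whether the stream is an interactive terminal before emitting colors, which this sketch skips.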
Format of log messages
----------------------

The ColoredFormatter_ class supports user defined log formats so you can use
any log format you like. The default log format is as follows::

   %(asctime)s %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

This log format results in the following output::

   2015-10-23 03:32:22 peter-macbook coloredlogs.demo[30462] DEBUG message with level 'debug'
   2015-10-23 03:32:23 peter-macbook coloredlogs.demo[30462] VERBOSE message with level 'verbose'
   2015-10-23 03:32:24 peter-macbook coloredlogs.demo[30462] INFO message with level 'info'
   ...

You can customize the log format and styling using environment variables as
well as programmatically, please refer to the `online documentation`_ for
details.

Enabling millisecond precision
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you're switching from `logging.basicConfig()`_ to `coloredlogs.install()`_
you may notice that timestamps no longer include milliseconds. This is because
coloredlogs doesn't output milliseconds in timestamps unless you explicitly
tell it to. There are three ways to do that:

1. The easy way is to pass the `milliseconds` argument to `coloredlogs.install()`_::

      coloredlogs.install(milliseconds=True)

   This became supported in `release 7.1`_ (due to `#16`_).

2. Alternatively you can change the log format `to include 'msecs'`_::

      %(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

   Here's what the call to `coloredlogs.install()`_ would then look like::

      coloredlogs.install(fmt='%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s')

   Customizing the log format also enables you to change the delimiter that
   separates seconds from milliseconds (the comma above). This became possible
   in `release 3.0`_ which added support for user defined log formats.

3. If the use of ``%(msecs)d`` isn't flexible enough you can instead add ``%f``
   to the date/time format; it will be replaced by the value of ``%(msecs)03d``.
   Support for the ``%f`` directive was added in `release 9.3`_ (due to `#45`_).

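Option 2 can be previewed without coloredlogs at all, since ``%(msecs)03d`` is interpreted by the standard ``logging.Formatter``. A minimal sketch (the short ``%H:%M:%S`` date format is just to keep the example compact):

```python
import logging

# A format with ',%(msecs)03d' appended to the timestamp, as in option 2 above.
formatter = logging.Formatter(
    fmt='%(asctime)s,%(msecs)03d %(name)s[%(process)d] %(levelname)s %(message)s',
    datefmt='%H:%M:%S')

# Build a record by hand so the example is self-contained.
record = logging.LogRecord(
    name='demo', level=logging.INFO, pathname='demo.py', lineno=1,
    msg="message with millisecond precision", args=(), exc_info=None)

line = formatter.format(record)
print(line)  # e.g. "12:34:56,789 demo[30462] INFO message with millisecond precision"
```

Because ``%(msecs)03d`` is a regular format field on the log record, any delimiter can be placed between it and ``%(asctime)s``.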
Custom logging fields
~~~~~~~~~~~~~~~~~~~~~

The following custom log format fields are supported:

- ``%(hostname)s`` provides the hostname of the local system.
- ``%(programname)s`` provides the name of the currently running program.
- ``%(username)s`` provides the username of the currently logged in user.

When `coloredlogs.install()`_ detects that any of these fields are used in the
format string the applicable logging.Filter_ subclasses are automatically
registered to populate the relevant log record fields.

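The mechanism described above can be reproduced by hand with a small ``logging.Filter`` subclass that annotates every record. This is a sketch of the idea, not the package's actual implementation:

```python
import logging
import socket

class HostNameFilter(logging.Filter):
    """Inject a `hostname` attribute into every log record."""

    def filter(self, record):
        record.hostname = socket.gethostname()
        return True  # never reject records, only annotate them

handler = logging.StreamHandler()
handler.addFilter(HostNameFilter())
# With the filter in place, %(hostname)s becomes a valid format field.
handler.setFormatter(logging.Formatter(
    '%(asctime)s %(hostname)s %(name)s %(levelname)s %(message)s'))

logger = logging.getLogger('hostname-demo')
logger.addHandler(handler)
logger.warning("this record carries a hostname field")
```

Registering such filters automatically, only when the corresponding field appears in the format string, is what saves users from wiring this up themselves.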
Changing text styles and colors
-------------------------------

The online documentation contains `an example of customizing the text styles and
colors <https://coloredlogs.readthedocs.io/en/latest/api.html#changing-the-colors-styles>`_.

Colored output from cron
------------------------

When `coloredlogs` is used in a cron_ job, the output that's e-mailed to you by
cron won't contain any ANSI escape sequences because `coloredlogs` realizes
that it's not attached to an interactive terminal. If you'd like to have colors
e-mailed to you by cron there are two ways to make it happen:

.. contents::
   :local:

Modifying your crontab
~~~~~~~~~~~~~~~~~~~~~~

Here's an example of a minimal crontab::

   MAILTO="your-email-address@here"
   CONTENT_TYPE="text/html"
   * * * * * root coloredlogs --to-html your-command

The ``coloredlogs`` program is installed when you install the `coloredlogs`
Python package. When you execute ``coloredlogs --to-html your-command`` it runs
``your-command`` under the external program ``script`` (you need to have this
installed). This makes ``your-command`` think that it's attached to an
interactive terminal, which means it will output ANSI escape sequences that
will then be converted to HTML by the ``coloredlogs`` program. Yes, this is a
bit convoluted, but it works great :-)

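To get a feel for the ANSI to HTML conversion performed behind the scenes, here is a deliberately tiny converter that only understands the eight basic foreground colors and the reset code. The real converter in the ``coloredlogs.converter`` module handles far more cases (URLs, background colors, 256-color palettes, text styles):

```python
import re

# Map the basic ANSI foreground color codes (30-37) to CSS color names.
ANSI_TO_CSS = {30: 'black', 31: 'red', 32: 'green', 33: 'yellow',
               34: 'blue', 35: 'magenta', 36: 'cyan', 37: 'white'}

def ansi_to_html(text):
    """Convert basic SGR color/reset sequences in `text` to <span> elements."""
    parts = []
    in_span = False
    # Split on escape sequences; the capture group keeps them in the result.
    for token in re.split(r'(\x1b\[[0-9;]*m)', text):
        match = re.fullmatch(r'\x1b\[([0-9;]*)m', token)
        if match:
            codes = [int(c) for c in match.group(1).split(';') if c]
            # An empty parameter list acts like a 0 (reset) code.
            if in_span and (not codes or 0 in codes):
                parts.append('</span>')
                in_span = False
            for code in codes:
                if code in ANSI_TO_CSS:
                    parts.append('<span style="color:%s">' % ANSI_TO_CSS[code])
                    in_span = True
        else:
            parts.append(token)
    if in_span:
        parts.append('</span>')  # close any span left open at end of input
    return ''.join(parts)

html = ansi_to_html('\x1b[31merror:\x1b[0m something failed')
```

Feeding the sketch colored output yields plain HTML that survives being e-mailed, which is exactly why cron output is routed through a converter like this.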
Modifying your Python code
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ColoredCronMailer_ class provides a context manager that automatically
enables HTML output when the ``$CONTENT_TYPE`` variable has been correctly set
in the crontab.

This requires my capturer_ package which you can install using ``pip install
'coloredlogs[cron]'``. The ``[cron]`` extra will pull in capturer_ 2.4 or newer
which is required to capture the output while silencing it - otherwise you'd
get duplicate output in the emails sent by ``cron``.

The context manager can also be used to retroactively silence output that has
already been produced. This can be useful to avoid spammy cron jobs that have
nothing useful to do but still email their output to the system administrator
every few minutes :-).

Contact
-------

The latest version of `coloredlogs` is available on PyPI_ and GitHub_. The
`online documentation`_ is available on Read the Docs and includes a
changelog_. For bug reports please create an issue on GitHub_. If you have
questions, suggestions, etc. feel free to send me an e-mail at
`peter@peterodding.com`_.

License
-------

This software is licensed under the `MIT license`_.

© 2020 Peter Odding.


.. External references:
.. _#16: https://github.com/xolox/python-coloredlogs/issues/16
.. _#45: https://github.com/xolox/python-coloredlogs/issues/45
.. _ANSI escape sequences: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
.. _capturer: https://pypi.org/project/capturer
.. _changelog: https://coloredlogs.readthedocs.org/en/latest/changelog.html
.. _colorama: https://pypi.org/project/colorama
.. _ColoredCronMailer: https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.converter.ColoredCronMailer
.. _ColoredFormatter: https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.ColoredFormatter
.. _coloredlogs.install(): https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.install
.. _cron: https://en.wikipedia.org/wiki/Cron
.. _GitHub: https://github.com/xolox/python-coloredlogs
.. _logging.basicConfig(): https://docs.python.org/2/library/logging.html#logging.basicConfig
.. _logging.Filter: https://docs.python.org/3/library/logging.html#filter-objects
.. _logging.Formatter: https://docs.python.org/2/library/logging.html#logging.Formatter
.. _logging: https://docs.python.org/2/library/logging.html
.. _MIT license: https://en.wikipedia.org/wiki/MIT_License
.. _online documentation: https://coloredlogs.readthedocs.io/
.. _per user site-packages directory: https://www.python.org/dev/peps/pep-0370/
.. _peter@peterodding.com: peter@peterodding.com
.. _PyPI: https://pypi.org/project/coloredlogs
.. _release 3.0: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-3-0-2015-10-23
.. _release 7.1: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-7-1-2017-07-15
.. _release 9.3: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-9-3-2018-04-29
.. _to include 'msecs': https://stackoverflow.com/questions/6290739/python-logging-use-milliseconds-in-time-format
.. _verboselogs: https://pypi.org/project/verboselogs
.. _virtual environments: http://docs.python-guide.org/en/latest/dev/virtualenvs/
@@ -0,0 +1,23 @@
../../../bin/coloredlogs,sha256=q-s7Du259fyzdKUNCF3nvGID0d2Ikl-6lvZd83I8vLY,259
coloredlogs-15.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
coloredlogs-15.0.1.dist-info/LICENSE.txt,sha256=C-9P-xhVtlVwME8gGMbUTkurl71d-HFuzXgAAU1xcmc,1056
coloredlogs-15.0.1.dist-info/METADATA,sha256=FmO6unRvNe77JJ2UU0XYhWbMTzZDhj6zGpEljDysZ0w,12387
coloredlogs-15.0.1.dist-info/RECORD,,
coloredlogs-15.0.1.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
coloredlogs-15.0.1.dist-info/entry_points.txt,sha256=uOmYMPjy7X6effmVh56i3U3N9S13DTRozJOAncBtZG0,54
coloredlogs-15.0.1.dist-info/top_level.txt,sha256=D3LuRvusEQ8xKUhPowPUmPWNm88FSw8ts3x2ulvCSyQ,12
coloredlogs.pth,sha256=3ag6hVmG76XNh_AkiwGZwAhusOjn_s59Z0GVnFzjlTY,147
coloredlogs/__init__.py,sha256=FxaiI1pQ0JzRYbx2K5f_Ar9Wa1meZLg5XKGWHKgPBNo,64423
coloredlogs/__pycache__/__init__.cpython-312.pyc,,
coloredlogs/__pycache__/cli.cpython-312.pyc,,
coloredlogs/__pycache__/demo.cpython-312.pyc,,
coloredlogs/__pycache__/syslog.cpython-312.pyc,,
coloredlogs/__pycache__/tests.cpython-312.pyc,,
coloredlogs/cli.py,sha256=iaHgUVeHPl5iIecKG1WJMslOJEKdhEusbxj7B4r1sHI,3493
coloredlogs/converter/__init__.py,sha256=t08e9V8-Ed9y9eNSYv3x1DH0hXgaXHJmQWp54j_a2nk,18353
coloredlogs/converter/__pycache__/__init__.cpython-312.pyc,,
coloredlogs/converter/__pycache__/colors.cpython-312.pyc,,
coloredlogs/converter/colors.py,sha256=1N2PpCa-EYMMWyC6Dw7SG2WX1gC67F9F5OteeFW52SM,5387
coloredlogs/demo.py,sha256=CaPVGOB6rCtJDyeqk0VBLi0t05jgUEniuB__VYo5kho,1825
coloredlogs/syslog.py,sha256=-XyRUzI20QKTTtVqhJAtfYc-ljw0crsjxqjWDjqrcqU,11849
coloredlogs/tests.py,sha256=oe_pWnTGfNEI7spY3_VA62fX_09ZEVFVERvr9sBMqnM,30791
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.34.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

@@ -0,0 +1,3 @@
[console_scripts]
coloredlogs = coloredlogs.cli:main

@@ -0,0 +1 @@
coloredlogs
@@ -0,0 +1 @@
import os; exec('try: __import__("coloredlogs").auto_install() if os.environ.get("COLOREDLOGS_AUTO_INSTALL") else None\nexcept ImportError: pass')
File diff suppressed because it is too large
@@ -0,0 +1,106 @@
# Command line interface for the coloredlogs package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: December 15, 2017
# URL: https://coloredlogs.readthedocs.io

"""
Usage: coloredlogs [OPTIONS] [ARGS]

The coloredlogs program provides a simple command line interface for the Python
package by the same name.

Supported options:

  -c, --convert, --to-html

    Capture the output of an external command (given by the positional
    arguments) and convert ANSI escape sequences in the output to HTML.

    If the `coloredlogs' program is attached to an interactive terminal it will
    write the generated HTML to a temporary file and open that file in a web
    browser, otherwise the generated HTML will be written to standard output.

    This requires the `script' program to fake the external command into
    thinking that it's attached to an interactive terminal (in order to enable
    output of ANSI escape sequences).

    If the command didn't produce any output then no HTML will be produced on
    standard output, this is to avoid empty emails from cron jobs.

  -d, --demo

    Perform a simple demonstration of the coloredlogs package to show the
    colored logging on an interactive terminal.

  -h, --help

    Show this message and exit.
"""

# Standard library modules.
import functools
import getopt
import logging
import sys
import tempfile
import webbrowser

# External dependencies.
from humanfriendly.terminal import connected_to_terminal, output, usage, warning

# Modules included in our package.
from coloredlogs.converter import capture, convert
from coloredlogs.demo import demonstrate_colored_logging

# Initialize a logger for this module.
logger = logging.getLogger(__name__)


def main():
    """Command line interface for the ``coloredlogs`` program."""
    actions = []
    try:
        # Parse the command line arguments.
        options, arguments = getopt.getopt(sys.argv[1:], 'cdh', [
            'convert', 'to-html', 'demo', 'help',
        ])
        # Map command line options to actions.
        for option, value in options:
            if option in ('-c', '--convert', '--to-html'):
                actions.append(functools.partial(convert_command_output, *arguments))
                arguments = []
            elif option in ('-d', '--demo'):
                actions.append(demonstrate_colored_logging)
            elif option in ('-h', '--help'):
                usage(__doc__)
                return
            else:
                assert False, "Programming error: Unhandled option!"
        if not actions:
            usage(__doc__)
            return
    except Exception as e:
        warning("Error: %s", e)
        sys.exit(1)
    for function in actions:
        function()


def convert_command_output(*command):
    """
    Command line interface for ``coloredlogs --to-html``.

    Takes a command (and its arguments) and runs the program under ``script``
    (emulating an interactive terminal), intercepts the output of the command
    and converts ANSI escape sequences in the output to HTML.
    """
    captured_output = capture(command)
    converted_output = convert(captured_output)
    if connected_to_terminal():
        fd, temporary_file = tempfile.mkstemp(suffix='.html')
        with open(temporary_file, 'w') as handle:
            handle.write(converted_output)
        webbrowser.open(temporary_file)
    elif captured_output and not captured_output.isspace():
        output(converted_output)
@@ -0,0 +1,403 @@
# Program to convert text with ANSI escape sequences to HTML.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: February 14, 2020
# URL: https://coloredlogs.readthedocs.io

"""Convert text with ANSI escape sequences to HTML."""

# Standard library modules.
import codecs
import os
import pipes
import re
import subprocess
import tempfile

# External dependencies.
from humanfriendly.terminal import (
    ANSI_CSI,
    ANSI_TEXT_STYLES,
    clean_terminal_output,
    output,
)

# Modules included in our package.
from coloredlogs.converter.colors import (
    BRIGHT_COLOR_PALETTE,
    EIGHT_COLOR_PALETTE,
    EXTENDED_COLOR_PALETTE,
)

# Compiled regular expression that matches leading spaces (indentation).
INDENT_PATTERN = re.compile('^ +', re.MULTILINE)

# Compiled regular expression that matches a tag followed by a space at the start of a line.
TAG_INDENT_PATTERN = re.compile('^(<[^>]+>) ', re.MULTILINE)

# Compiled regular expression that matches strings we want to convert. Used to
# separate all special strings and literal output in a single pass (this allows
# us to properly encode the output without resorting to nasty hacks).
TOKEN_PATTERN = re.compile(r'''
    # Wrap the pattern in a capture group so that re.split() includes the
    # substrings that match the pattern in the resulting list of strings.
    (
        # Match URLs with supported schemes and domain names.
        (?: https?:// | www\. )
        # Scan until the end of the URL by matching non-whitespace characters
        # that are also not escape characters.
        [^\s\x1b]+
        # Alternatively ...
        |
        # Match (what looks like) ANSI escape sequences.
        \x1b \[ .*? m
    )
''', re.UNICODE | re.VERBOSE)

def capture(command, encoding='UTF-8'):
|
||||
"""
|
||||
Capture the output of an external command as if it runs in an interactive terminal.
|
||||
|
||||
:param command: The command name and its arguments (a list of strings).
|
||||
:param encoding: The encoding to use to decode the output (a string).
|
||||
:returns: The output of the command.
|
||||
|
||||
This function runs an external command under ``script`` (emulating an
|
||||
interactive terminal) to capture the output of the command as if it was
|
||||
running in an interactive terminal (including ANSI escape sequences).
|
||||
"""
|
||||
with open(os.devnull, 'wb') as dev_null:
|
||||
# We start by invoking the `script' program in a form that is supported
|
||||
# by the Linux implementation [1] but fails command line validation on
|
||||
# the MacOS (BSD) implementation [2]: The command is specified using
|
||||
# the -c option and the typescript file is /dev/null.
|
||||
#
|
||||
# [1] http://man7.org/linux/man-pages/man1/script.1.html
|
||||
# [2] https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/script.1.html
|
||||
command_line = ['script', '-qc', ' '.join(map(pipes.quote, command)), '/dev/null']
|
||||
script = subprocess.Popen(command_line, stdout=subprocess.PIPE, stderr=dev_null)
|
||||
stdout, stderr = script.communicate()
|
||||
if script.returncode == 0:
|
||||
# If `script' succeeded we assume that it understood our command line
|
||||
# invocation which means it's the Linux implementation (in this case
|
||||
# we can use standard output instead of a temporary file).
|
||||
output = stdout.decode(encoding)
|
||||
else:
|
||||
# If `script' failed we assume that it didn't understand our command
|
||||
# line invocation which means it's the MacOS (BSD) implementation
|
||||
# (in this case we need a temporary file because the command line
|
||||
# interface requires it).
|
||||
fd, temporary_file = tempfile.mkstemp(prefix='coloredlogs-', suffix='-capture.txt')
|
||||
try:
|
||||
command_line = ['script', '-q', temporary_file] + list(command)
|
||||
subprocess.Popen(command_line, stdout=dev_null, stderr=dev_null).wait()
|
||||
with codecs.open(temporary_file, 'rb') as handle:
|
||||
output = handle.read()
|
||||
finally:
|
||||
os.unlink(temporary_file)
|
||||
# On MacOS when standard input is /dev/null I've observed
|
||||
# the captured output starting with the characters '^D':
|
||||
#
|
||||
# $ script -q capture.txt echo example </dev/null
|
||||
# example
|
||||
# $ xxd capture.txt
|
||||
# 00000000: 5e44 0808 6578 616d 706c 650d 0a ^D..example..
|
||||
#
|
||||
# I'm not sure why this is here, although I suppose it has to do
|
||||
# with ^D in caret notation signifying end-of-file [1]. What I do
|
||||
# know is that this is an implementation detail that callers of the
|
||||
# capture() function shouldn't be bothered with, so we strip it.
|
||||
#
|
||||
# [1] https://en.wikipedia.org/wiki/End-of-file
|
||||
if output.startswith(b'^D'):
|
||||
output = output[2:]
|
||||
output = output.decode(encoding)
|
||||
# Clean up backspace and carriage return characters and the 'erase line'
|
||||
# ANSI escape sequence and return the output as a Unicode string.
|
||||
return u'\n'.join(clean_terminal_output(output))
|
||||
|
||||
|
||||
def convert(text, code=True, tabsize=4):
|
||||
"""
|
||||
Convert text with ANSI escape sequences to HTML.
|
||||
|
||||
:param text: The text with ANSI escape sequences (a string).
|
||||
:param code: Whether to wrap the returned HTML fragment in a
|
||||
``<code>...</code>`` element (a boolean, defaults
|
||||
to :data:`True`).
|
||||
:param tabsize: Refer to :func:`str.expandtabs()` for details.
|
||||
:returns: The text converted to HTML (a string).
|
||||
"""
|
||||
output = []
|
||||
in_span = False
|
||||
compatible_text_styles = {
|
||||
# The following ANSI text styles have an obvious mapping to CSS.
|
||||
ANSI_TEXT_STYLES['bold']: {'font-weight': 'bold'},
|
||||
ANSI_TEXT_STYLES['strike_through']: {'text-decoration': 'line-through'},
|
||||
ANSI_TEXT_STYLES['underline']: {'text-decoration': 'underline'},
|
||||
}
|
||||
for token in TOKEN_PATTERN.split(text):
|
||||
if token.startswith(('http://', 'https://', 'www.')):
|
||||
url = token if '://' in token else ('http://' + token)
|
||||
token = u'<a href="%s" style="color:inherit">%s</a>' % (html_encode(url), html_encode(token))
|
||||
elif token.startswith(ANSI_CSI):
|
||||
ansi_codes = token[len(ANSI_CSI):-1].split(';')
|
||||
if all(c.isdigit() for c in ansi_codes):
|
||||
ansi_codes = list(map(int, ansi_codes))
|
||||
# First we check for a reset code to close the previous <span>
|
||||
# element. As explained on Wikipedia [1] an absence of codes
|
||||
# implies a reset code as well: "No parameters at all in ESC[m acts
|
||||
# like a 0 reset code".
|
||||
# [1] https://en.wikipedia.org/wiki/ANSI_escape_code#CSI_sequences
|
||||
if in_span and (0 in ansi_codes or not ansi_codes):
|
||||
output.append('</span>')
|
||||
in_span = False
|
||||
# Now we're ready to generate the next <span> element (if any) in
|
||||
# the knowledge that we're emitting opening <span> and closing
|
||||
# </span> tags in the correct order.
|
||||
styles = {}
|
||||
is_faint = (ANSI_TEXT_STYLES['faint'] in ansi_codes)
|
||||
is_inverse = (ANSI_TEXT_STYLES['inverse'] in ansi_codes)
|
||||
while ansi_codes:
|
||||
number = ansi_codes.pop(0)
|
||||
# Try to match a compatible text style.
|
||||
if number in compatible_text_styles:
|
||||
styles.update(compatible_text_styles[number])
|
||||
continue
|
||||
# Try to extract a text and/or background color.
|
||||
text_color = None
|
||||
background_color = None
|
||||
if 30 <= number <= 37:
|
||||
# 30-37 sets the text color from the eight color palette.
|
||||
text_color = EIGHT_COLOR_PALETTE[number - 30]
|
||||
elif 40 <= number <= 47:
|
||||
# 40-47 sets the background color from the eight color palette.
|
||||
background_color = EIGHT_COLOR_PALETTE[number - 40]
|
||||
elif 90 <= number <= 97:
|
||||
# 90-97 sets the text color from the high-intensity eight color palette.
|
||||
                    text_color = BRIGHT_COLOR_PALETTE[number - 90]
                elif 100 <= number <= 107:
                    # 100-107 sets the background color from the high-intensity eight color palette.
                    background_color = BRIGHT_COLOR_PALETTE[number - 100]
                elif number in (38, 39) and len(ansi_codes) >= 2 and ansi_codes[0] == 5:
                    # 38;5;N is a text color in the 256 color mode palette,
                    # 39;5;N is a background color in the 256 color mode palette.
                    try:
                        # Consume the 5 following 38 or 39.
                        ansi_codes.pop(0)
                        # Consume the 256 color mode color index.
                        color_index = ansi_codes.pop(0)
                        # Set the variable to the corresponding HTML/CSS color.
                        if number == 38:
                            text_color = EXTENDED_COLOR_PALETTE[color_index]
                        elif number == 39:
                            background_color = EXTENDED_COLOR_PALETTE[color_index]
                    except (ValueError, IndexError):
                        pass
            # Apply the 'faint' or 'inverse' text style
            # by manipulating the selected color(s).
            if text_color and is_inverse:
                # Use the text color as the background color and pick a
                # text color that will be visible on the resulting
                # background color.
                background_color = text_color
                text_color = select_text_color(*parse_hex_color(text_color))
            if text_color and is_faint:
                # Because I wasn't sure how to implement faint colors
                # based on normal colors I looked at how gnome-terminal
                # (my terminal of choice) handles this and it appears
                # to just pick a somewhat darker color.
                text_color = '#%02X%02X%02X' % tuple(
                    max(0, n - 40) for n in parse_hex_color(text_color)
                )
            if text_color:
                styles['color'] = text_color
            if background_color:
                styles['background-color'] = background_color
            if styles:
                token = '<span style="%s">' % ';'.join(k + ':' + v for k, v in sorted(styles.items()))
                in_span = True
            else:
                token = ''
        else:
            token = html_encode(token)
        output.append(token)
    html = ''.join(output)
    html = encode_whitespace(html, tabsize)
    if code:
        html = '<code>%s</code>' % html
    return html

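The `elif` chain above maps SGR parameters 90-97 and 100-107 onto the bright palette by subtracting a fixed offset. A minimal standalone sketch of that lookup (the palette values are copied from the colors module added later in this commit; the helper name is hypothetical):

```python
# Sketch of the offset-based lookup from the SGR handling above.
BRIGHT_COLOR_PALETTE = (
    '#808080', '#F00', '#0F0', '#FF0',
    '#00F', '#F0F', '#0FF', '#FFF',
)

def lookup_bright_color(number):
    """Map an SGR parameter to (kind, CSS color) or None."""
    if 90 <= number <= 97:
        # 90-97 selects a high-intensity text color.
        return ('text', BRIGHT_COLOR_PALETTE[number - 90])
    elif 100 <= number <= 107:
        # 100-107 selects a high-intensity background color.
        return ('background', BRIGHT_COLOR_PALETTE[number - 100])
    return None

print(lookup_bright_color(91))   # bright red foreground
print(lookup_bright_color(104))  # bright blue background
```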
def encode_whitespace(text, tabsize=4):
    """
    Encode whitespace so that web browsers properly render it.

    :param text: The plain text (a string).
    :param tabsize: Refer to :func:`str.expandtabs()` for details.
    :returns: The text converted to HTML (a string).

    The purpose of this function is to encode whitespace in such a way that web
    browsers render the same whitespace regardless of whether 'preformatted'
    styling is used (by wrapping the text in a ``<pre>...</pre>`` element).

    .. note:: While the string manipulation performed by this function is
              specifically intended not to corrupt the HTML generated by
              :func:`convert()` it definitely does have the potential to
              corrupt HTML from other sources. You have been warned :-).
    """
    # Convert Windows line endings (CR+LF) to UNIX line endings (LF).
    text = text.replace('\r\n', '\n')
    # Convert UNIX line endings (LF) to HTML line endings (<br>).
    text = text.replace('\n', '<br>\n')
    # Convert tabs to spaces.
    text = text.expandtabs(tabsize)
    # Convert leading spaces (that is to say spaces at the start of the string
    # and/or directly after a line ending) into non-breaking spaces, otherwise
    # HTML rendering engines will simply ignore these spaces.
    text = re.sub(INDENT_PATTERN, encode_whitespace_cb, text)
    # The conversion of leading spaces we just did misses a corner case where a
    # line starts with an HTML tag but the first visible text is a space. Web
    # browsers seem to ignore these spaces, so we need to convert them.
    text = re.sub(TAG_INDENT_PATTERN, r'\1&nbsp;', text)
    # Convert runs of multiple spaces into non-breaking spaces to avoid HTML
    # rendering engines from visually collapsing runs of spaces into a single
    # space. We specifically don't replace single spaces for several reasons:
    # 1. We'd break the HTML emitted by convert() by replacing spaces
    #    inside HTML elements (for example the spaces that separate
    #    element names from attribute names).
    # 2. If every single space is replaced by a non-breaking space,
    #    web browsers perform awkwardly unintuitive word wrapping.
    # 3. The HTML output would be bloated for no good reason.
    text = re.sub(' {2,}', encode_whitespace_cb, text)
    return text

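A simplified, self-contained sketch of the whitespace encoding performed above. `INDENT_PATTERN` and `TAG_INDENT_PATTERN` are defined elsewhere in this module (not in this hunk), so a plausible stand-in for the leading-space pattern is assumed here:

```python
import re

# Assumed equivalent of INDENT_PATTERN: spaces at the start of the
# string or directly after a line ending.
INDENT_PATTERN = re.compile(r'^ +|(?<=\n) +')

def encode_whitespace_sketch(text, tabsize=4):
    text = text.replace('\r\n', '\n')
    text = text.replace('\n', '<br>\n')
    text = text.expandtabs(tabsize)
    # Leading spaces become non-breaking spaces so browsers keep them.
    text = INDENT_PATTERN.sub(lambda m: '&nbsp;' * len(m.group(0)), text)
    # Runs of two or more spaces are preserved as well.
    text = re.sub(' {2,}', lambda m: '&nbsp;' * len(m.group(0)), text)
    return text

print(encode_whitespace_sketch('  indented\nnext  line'))
```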
def encode_whitespace_cb(match):
    """
    Replace runs of multiple spaces with non-breaking spaces.

    :param match: A regular expression match object.
    :returns: The replacement string.

    This function is used by :func:`encode_whitespace()` as a callback for
    replacement using a regular expression pattern.
    """
    return '&nbsp;' * len(match.group(0))

def html_encode(text):
    """
    Encode characters with a special meaning as HTML.

    :param text: The plain text (a string).
    :returns: The text converted to HTML (a string).
    """
    text = text.replace('&', '&amp;')
    text = text.replace('<', '&lt;')
    text = text.replace('>', '&gt;')
    text = text.replace('"', '&quot;')
    return text

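The order of the replacements matters: ampersands must be encoded first, otherwise the entities produced by the later replacements would themselves be double-encoded. A quick self-contained check:

```python
def html_encode(text):
    # Ampersands first, so the '&' in '&lt;' etc. isn't re-encoded.
    text = text.replace('&', '&amp;')
    text = text.replace('<', '&lt;')
    text = text.replace('>', '&gt;')
    text = text.replace('"', '&quot;')
    return text

print(html_encode('if a < b & c > d: print("ok")'))
```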
def parse_hex_color(value):
    """
    Convert a CSS color in hexadecimal notation into its R, G, B components.

    :param value: A CSS color in hexadecimal notation (a string like '#000000').
    :return: A tuple with three integers (with values between 0 and 255)
             corresponding to the R, G and B components of the color.
    :raises: :exc:`~exceptions.ValueError` on values that can't be parsed.
    """
    if value.startswith('#'):
        value = value[1:]
    if len(value) == 3:
        return (
            int(value[0] * 2, 16),
            int(value[1] * 2, 16),
            int(value[2] * 2, 16),
        )
    elif len(value) == 6:
        return (
            int(value[0:2], 16),
            int(value[2:4], 16),
            int(value[4:6], 16),
        )
    else:
        raise ValueError()

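The function above accepts both shorthand (`#RGB`) and full (`#RRGGBB`) notation; in the shorthand form each digit is doubled. A compact standalone sketch of the same parsing:

```python
def parse_hex_color(value):
    # Strip the optional leading '#'.
    if value.startswith('#'):
        value = value[1:]
    if len(value) == 3:
        # Shorthand notation: each digit is doubled ('#F80' -> '#FF8800').
        return tuple(int(digit * 2, 16) for digit in value)
    elif len(value) == 6:
        return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))
    else:
        raise ValueError("unsupported hex color: %r" % value)

print(parse_hex_color('#FFC706'))  # the 'yellow' palette entry
print(parse_hex_color('#F00'))
```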
def select_text_color(r, g, b):
    """
    Choose a suitable color for the inverse text style.

    :param r: The amount of red (an integer between 0 and 255).
    :param g: The amount of green (an integer between 0 and 255).
    :param b: The amount of blue (an integer between 0 and 255).
    :returns: A CSS color in hexadecimal notation (a string).

    In inverse mode the color that is normally used for the text is instead
    used for the background, however this can render the text unreadable. The
    purpose of :func:`select_text_color()` is to make an effort to select a
    suitable text color. Based on http://stackoverflow.com/a/3943023/112731.
    """
    return '#000' if (r * 0.299 + g * 0.587 + b * 0.114) > 186 else '#FFF'

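The weighted sum here is the ITU-R BT.601 luma approximation of perceived brightness (green contributes most, blue least). A worked example against two of the palette colors:

```python
def select_text_color(r, g, b):
    # Above the 186 luma threshold the background counts as 'light',
    # so black text is chosen; otherwise white text.
    return '#000' if (r * 0.299 + g * 0.587 + b * 0.114) > 186 else '#FFF'

# The yellow palette entry (#FFC706) has luma
# 255 * 0.299 + 199 * 0.587 + 6 * 0.114 = 193.742, so black text is chosen.
print(select_text_color(255, 199, 6))
print(select_text_color(0, 111, 184))  # the blue entry (#006FB8) gets white text
```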
class ColoredCronMailer(object):

    """
    Easy to use integration between :mod:`coloredlogs` and the UNIX ``cron`` daemon.

    By using :class:`ColoredCronMailer` as a context manager in the command
    line interface of your Python program you make it trivially easy for users
    of your program to opt in to HTML output under ``cron``: The only thing the
    user needs to do is set ``CONTENT_TYPE="text/html"`` in their crontab!

    Under the hood this requires quite a bit of magic and I must admit that I
    developed this code simply because I was curious whether it could even be
    done :-). It requires my :mod:`capturer` package which you can install
    using ``pip install 'coloredlogs[cron]'``. The ``[cron]`` extra will pull
    in :mod:`capturer` 2.4 or newer which is required to capture the output
    while silencing it - otherwise you'd get duplicate output in the emails
    sent by ``cron``.
    """

    def __init__(self):
        """Initialize output capturing when running under ``cron`` with the correct configuration."""
        self.is_enabled = 'text/html' in os.environ.get('CONTENT_TYPE', 'text/plain')
        self.is_silent = False
        if self.is_enabled:
            # We import capturer here so that the coloredlogs[cron] extra
            # isn't required to use the other functions in this module.
            from capturer import CaptureOutput
            self.capturer = CaptureOutput(merged=True, relay=False)

    def __enter__(self):
        """Start capturing output (when applicable)."""
        if self.is_enabled:
            self.capturer.__enter__()
        return self

    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        """Stop capturing output and convert the output to HTML (when applicable)."""
        if self.is_enabled:
            if not self.is_silent:
                # Only call output() when we captured something useful.
                text = self.capturer.get_text()
                if text and not text.isspace():
                    output(convert(text))
            self.capturer.__exit__(exc_type, exc_value, traceback)

    def silence(self):
        """
        Tell :func:`__exit__()` to swallow all output (things will be silent).

        This can be useful when a Python program is written in such a way that
        it has already produced output by the time it becomes apparent that
        nothing useful can be done (say in a cron job that runs every few
        minutes :-p). By calling :func:`silence()` the output can be swallowed
        retroactively, avoiding useless emails from ``cron``.
        """
        self.is_silent = True
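The opt-in check in `__init__()` can be exercised on its own: `cron` propagates variables set in the crontab into each job's environment, so setting `CONTENT_TYPE="text/html"` there turns on HTML output. A stdlib-only sketch with a hypothetical helper name:

```python
import os

def html_output_requested(environ=None):
    # Mirrors the check in ColoredCronMailer.__init__(): a substring match
    # so parameters like '; charset=UTF-8' don't defeat the opt-in.
    environ = os.environ if environ is None else environ
    return 'text/html' in environ.get('CONTENT_TYPE', 'text/plain')

print(html_output_requested({'CONTENT_TYPE': 'text/html; charset=UTF-8'}))
print(html_output_requested({}))
```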
@@ -0,0 +1,310 @@
# Mapping of ANSI color codes to HTML/CSS colors.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: January 14, 2018
# URL: https://coloredlogs.readthedocs.io

"""Mapping of ANSI color codes to HTML/CSS colors."""

EIGHT_COLOR_PALETTE = (
    '#010101',  # black
    '#DE382B',  # red
    '#39B54A',  # green
    '#FFC706',  # yellow
    '#006FB8',  # blue
    '#762671',  # magenta
    '#2CB5E9',  # cyan
    '#CCC',     # white
)
"""
A tuple of strings mapping basic color codes to CSS colors.

The items in this tuple correspond to the eight basic color codes for black,
red, green, yellow, blue, magenta, cyan and white as defined in the original
standard for ANSI escape sequences. The CSS colors are based on the `Ubuntu
color scheme`_ described on Wikipedia and they are encoded as hexadecimal
values to get the shortest strings, which reduces the size (in bytes) of
conversion output.

.. _Ubuntu color scheme: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
"""

BRIGHT_COLOR_PALETTE = (
    '#808080',  # black
    '#F00',     # red
    '#0F0',     # green
    '#FF0',     # yellow
    '#00F',     # blue
    '#F0F',     # magenta
    '#0FF',     # cyan
    '#FFF',     # white
)
"""
A tuple of strings mapping bright color codes to CSS colors.

This tuple maps the bright color variants of :data:`EIGHT_COLOR_PALETTE`.
"""

EXTENDED_COLOR_PALETTE = (
    '#000000',
    '#800000',
    '#008000',
    '#808000',
    '#000080',
    '#800080',
    '#008080',
    '#C0C0C0',
    '#808080',
    '#FF0000',
    '#00FF00',
    '#FFFF00',
    '#0000FF',
    '#FF00FF',
    '#00FFFF',
    '#FFFFFF',
    '#000000',
    '#00005F',
    '#000087',
    '#0000AF',
    '#0000D7',
    '#0000FF',
    '#005F00',
    '#005F5F',
    '#005F87',
    '#005FAF',
    '#005FD7',
    '#005FFF',
    '#008700',
    '#00875F',
    '#008787',
    '#0087AF',
    '#0087D7',
    '#0087FF',
    '#00AF00',
    '#00AF5F',
    '#00AF87',
    '#00AFAF',
    '#00AFD7',
    '#00AFFF',
    '#00D700',
    '#00D75F',
    '#00D787',
    '#00D7AF',
    '#00D7D7',
    '#00D7FF',
    '#00FF00',
    '#00FF5F',
    '#00FF87',
    '#00FFAF',
    '#00FFD7',
    '#00FFFF',
    '#5F0000',
    '#5F005F',
    '#5F0087',
    '#5F00AF',
    '#5F00D7',
    '#5F00FF',
    '#5F5F00',
    '#5F5F5F',
    '#5F5F87',
    '#5F5FAF',
    '#5F5FD7',
    '#5F5FFF',
    '#5F8700',
    '#5F875F',
    '#5F8787',
    '#5F87AF',
    '#5F87D7',
    '#5F87FF',
    '#5FAF00',
    '#5FAF5F',
    '#5FAF87',
    '#5FAFAF',
    '#5FAFD7',
    '#5FAFFF',
    '#5FD700',
    '#5FD75F',
    '#5FD787',
    '#5FD7AF',
    '#5FD7D7',
    '#5FD7FF',
    '#5FFF00',
    '#5FFF5F',
    '#5FFF87',
    '#5FFFAF',
    '#5FFFD7',
    '#5FFFFF',
    '#870000',
    '#87005F',
    '#870087',
    '#8700AF',
    '#8700D7',
    '#8700FF',
    '#875F00',
    '#875F5F',
    '#875F87',
    '#875FAF',
    '#875FD7',
    '#875FFF',
    '#878700',
    '#87875F',
    '#878787',
    '#8787AF',
    '#8787D7',
    '#8787FF',
    '#87AF00',
    '#87AF5F',
    '#87AF87',
    '#87AFAF',
    '#87AFD7',
    '#87AFFF',
    '#87D700',
    '#87D75F',
    '#87D787',
    '#87D7AF',
    '#87D7D7',
    '#87D7FF',
    '#87FF00',
    '#87FF5F',
    '#87FF87',
    '#87FFAF',
    '#87FFD7',
    '#87FFFF',
    '#AF0000',
    '#AF005F',
    '#AF0087',
    '#AF00AF',
    '#AF00D7',
    '#AF00FF',
    '#AF5F00',
    '#AF5F5F',
    '#AF5F87',
    '#AF5FAF',
    '#AF5FD7',
    '#AF5FFF',
    '#AF8700',
    '#AF875F',
    '#AF8787',
    '#AF87AF',
    '#AF87D7',
    '#AF87FF',
    '#AFAF00',
    '#AFAF5F',
    '#AFAF87',
    '#AFAFAF',
    '#AFAFD7',
    '#AFAFFF',
    '#AFD700',
    '#AFD75F',
    '#AFD787',
    '#AFD7AF',
    '#AFD7D7',
    '#AFD7FF',
    '#AFFF00',
    '#AFFF5F',
    '#AFFF87',
    '#AFFFAF',
    '#AFFFD7',
    '#AFFFFF',
    '#D70000',
    '#D7005F',
    '#D70087',
    '#D700AF',
    '#D700D7',
    '#D700FF',
    '#D75F00',
    '#D75F5F',
    '#D75F87',
    '#D75FAF',
    '#D75FD7',
    '#D75FFF',
    '#D78700',
    '#D7875F',
    '#D78787',
    '#D787AF',
    '#D787D7',
    '#D787FF',
    '#D7AF00',
    '#D7AF5F',
    '#D7AF87',
    '#D7AFAF',
    '#D7AFD7',
    '#D7AFFF',
    '#D7D700',
    '#D7D75F',
    '#D7D787',
    '#D7D7AF',
    '#D7D7D7',
    '#D7D7FF',
    '#D7FF00',
    '#D7FF5F',
    '#D7FF87',
    '#D7FFAF',
    '#D7FFD7',
    '#D7FFFF',
    '#FF0000',
    '#FF005F',
    '#FF0087',
    '#FF00AF',
    '#FF00D7',
    '#FF00FF',
    '#FF5F00',
    '#FF5F5F',
    '#FF5F87',
    '#FF5FAF',
    '#FF5FD7',
    '#FF5FFF',
    '#FF8700',
    '#FF875F',
    '#FF8787',
    '#FF87AF',
    '#FF87D7',
    '#FF87FF',
    '#FFAF00',
    '#FFAF5F',
    '#FFAF87',
    '#FFAFAF',
    '#FFAFD7',
    '#FFAFFF',
    '#FFD700',
    '#FFD75F',
    '#FFD787',
    '#FFD7AF',
    '#FFD7D7',
    '#FFD7FF',
    '#FFFF00',
    '#FFFF5F',
    '#FFFF87',
    '#FFFFAF',
    '#FFFFD7',
    '#FFFFFF',
    '#080808',
    '#121212',
    '#1C1C1C',
    '#262626',
    '#303030',
    '#3A3A3A',
    '#444444',
    '#4E4E4E',
    '#585858',
    '#626262',
    '#6C6C6C',
    '#767676',
    '#808080',
    '#8A8A8A',
    '#949494',
    '#9E9E9E',
    '#A8A8A8',
    '#B2B2B2',
    '#BCBCBC',
    '#C6C6C6',
    '#D0D0D0',
    '#DADADA',
    '#E4E4E4',
    '#EEEEEE',
)
"""
A tuple of strings mapping 256 color mode color codes to CSS colors.

The items in this tuple correspond to the color codes in the 256 color mode palette.
"""
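The non-system entries of the 256 color palette above are formulaic: a 6x6x6 color cube over the component levels `(0, 95, 135, 175, 215, 255)` followed by a 24-step grayscale ramp from 8 to 238. They can be generated rather than listed, which also makes the table easy to verify:

```python
# Generate the formulaic part of the 256 color palette.
cube_levels = (0x00, 0x5F, 0x87, 0xAF, 0xD7, 0xFF)
cube = ['#%02X%02X%02X' % (r, g, b)
        for r in cube_levels for g in cube_levels for b in cube_levels]
grayscale = ['#%02X%02X%02X' % ((gray,) * 3) for gray in range(8, 248, 10)]

print(len(cube), len(grayscale))  # 216 24
print(cube[0], cube[1], grayscale[0], grayscale[-1])
```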
@@ -0,0 +1,49 @@
# Demonstration of the coloredlogs package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: January 14, 2018
# URL: https://coloredlogs.readthedocs.io

"""A simple demonstration of the `coloredlogs` package."""

# Standard library modules.
import os
import time

# Modules included in our package.
import coloredlogs

# If my verbose logger is installed, we'll use that for the demo.
try:
    from verboselogs import VerboseLogger as getLogger
except ImportError:
    from logging import getLogger

# Initialize a logger for this module.
logger = getLogger(__name__)

DEMO_DELAY = float(os.environ.get('COLOREDLOGS_DEMO_DELAY', '1'))
"""The number of seconds between each message emitted by :func:`demonstrate_colored_logging()`."""


def demonstrate_colored_logging():
    """Interactively demonstrate the :mod:`coloredlogs` package."""
    # Determine the available logging levels and order them by numeric value.
    decorated_levels = []
    defined_levels = coloredlogs.find_defined_levels()
    normalizer = coloredlogs.NameNormalizer()
    for name, level in defined_levels.items():
        if name != 'NOTSET':
            item = (level, normalizer.normalize_name(name))
            if item not in decorated_levels:
                decorated_levels.append(item)
    ordered_levels = sorted(decorated_levels)
    # Initialize colored output to the terminal, default to the most
    # verbose logging level but enable the user to customize it.
    coloredlogs.install(level=os.environ.get('COLOREDLOGS_LOG_LEVEL', ordered_levels[0][1]))
    # Print some examples with different timestamps.
    for level, name in ordered_levels:
        log_method = getattr(logger, name, None)
        if log_method:
            log_method("message with level %s (%i)", name, level)
            time.sleep(DEMO_DELAY)
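The demo uses the classic decorate-sort pattern: each level is wrapped in a `(number, name)` tuple so that `sorted()` orders the levels by severity. A stdlib-only sketch with Python's built-in levels standing in for `coloredlogs.find_defined_levels()`:

```python
import logging

# Decorate each level name with its numeric value and sort by that value.
defined_levels = {
    'DEBUG': logging.DEBUG, 'INFO': logging.INFO,
    'WARNING': logging.WARNING, 'ERROR': logging.ERROR,
    'CRITICAL': logging.CRITICAL,
}
decorated = sorted((level, name.lower()) for name, level in defined_levels.items())
print(decorated[0])   # the most verbose level comes first
print([name for _, name in decorated])
```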
@@ -0,0 +1,292 @@
# Easy to use system logging for Python's logging module.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: December 10, 2020
# URL: https://coloredlogs.readthedocs.io

"""
Easy to use UNIX system logging for Python's :mod:`logging` module.

Admittedly system logging has little to do with colored terminal output, however:

- The `coloredlogs` package is my attempt to do Python logging right and system
  logging is an important part of that equation.

- I've seen a surprising number of quirks and mistakes in system logging done
  in Python, for example including ``%(asctime)s`` in a format string (the
  system logging daemon is responsible for adding timestamps and thus you end
  up with duplicate timestamps that make the logs awful to read :-).

- The ``%(programname)s`` filter originated in my system logging code and I
  wanted it in `coloredlogs` so the step to include this module wasn't that big.

- As a bonus this Python module now has a test suite and proper documentation.

So there :-P. Go take a look at :func:`enable_system_logging()`.
"""

# Standard library modules.
import logging
import logging.handlers
import os
import socket
import sys

# External dependencies.
from humanfriendly import coerce_boolean
from humanfriendly.compat import on_macos, on_windows

# Modules included in our package.
from coloredlogs import (
    DEFAULT_LOG_LEVEL,
    ProgramNameFilter,
    adjust_level,
    find_program_name,
    level_to_number,
    replace_handler,
)

LOG_DEVICE_MACOSX = '/var/run/syslog'
"""The pathname of the log device on Mac OS X (a string)."""

LOG_DEVICE_UNIX = '/dev/log'
"""The pathname of the log device on Linux and most other UNIX systems (a string)."""

DEFAULT_LOG_FORMAT = '%(programname)s[%(process)d]: %(levelname)s %(message)s'
"""
The default format for log messages sent to the system log (a string).

The ``%(programname)s`` format requires :class:`~coloredlogs.ProgramNameFilter`
but :func:`enable_system_logging()` takes care of this for you.

The ``name[pid]:`` construct (specifically the colon) in the format allows
rsyslogd_ to extract the ``$programname`` from each log message, which in turn
allows configuration files in ``/etc/rsyslog.d/*.conf`` to filter these log
messages to a separate log file (if the need arises).

.. _rsyslogd: https://en.wikipedia.org/wiki/Rsyslog
"""

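`%(programname)s` is not a standard `LogRecord` attribute, which is why the format requires a filter to inject it. A stdlib-only sketch of that mechanism, using a hypothetical filter class standing in for `coloredlogs.ProgramNameFilter`:

```python
import logging

DEFAULT_LOG_FORMAT = '%(programname)s[%(process)d]: %(levelname)s %(message)s'

class ProgramNameInjector(logging.Filter):
    # Hypothetical stand-in for coloredlogs.ProgramNameFilter: it adds the
    # non-standard %(programname)s attribute to every record passing through.
    def __init__(self, programname):
        super().__init__()
        self.programname = programname

    def filter(self, record):
        record.programname = self.programname
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(DEFAULT_LOG_FORMAT))
handler.addFilter(ProgramNameInjector('myscript'))
logger = logging.getLogger('syslog-format-demo')
logger.addHandler(handler)
logger.warning("disk almost full")  # -> myscript[<pid>]: WARNING disk almost full
```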
# Initialize a logger for this module.
logger = logging.getLogger(__name__)


class SystemLogging(object):

    """Context manager to enable system logging."""

    def __init__(self, *args, **kw):
        """
        Initialize a :class:`SystemLogging` object.

        :param args: Positional arguments to :func:`enable_system_logging()`.
        :param kw: Keyword arguments to :func:`enable_system_logging()`.
        """
        self.args = args
        self.kw = kw
        self.handler = None

    def __enter__(self):
        """Enable system logging when entering the context."""
        if self.handler is None:
            self.handler = enable_system_logging(*self.args, **self.kw)
        return self.handler

    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        """
        Disable system logging when leaving the context.

        .. note:: If an exception is being handled when we leave the context a
                  warning message including traceback is logged *before* system
                  logging is disabled.
        """
        if self.handler is not None:
            if exc_type is not None:
                logger.warning("Disabling system logging due to unhandled exception!", exc_info=True)
            (self.kw.get('logger') or logging.getLogger()).removeHandler(self.handler)
            self.handler = None


def enable_system_logging(programname=None, fmt=None, logger=None, reconfigure=True, **kw):
    """
    Redirect :mod:`logging` messages to the system log (e.g. ``/var/log/syslog``).

    :param programname: The program name to embed in log messages (a string, defaults
                        to the result of :func:`~coloredlogs.find_program_name()`).
    :param fmt: The log format for system log messages (a string, defaults to
                :data:`DEFAULT_LOG_FORMAT`).
    :param logger: The logger to which the :class:`~logging.handlers.SysLogHandler`
                   should be connected (defaults to the root logger).
    :param level: The logging level for the :class:`~logging.handlers.SysLogHandler`
                  (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced
                  using :func:`~coloredlogs.level_to_number()`.
    :param reconfigure: If :data:`True` (the default) multiple calls to
                        :func:`enable_system_logging()` will each override
                        the previous configuration.
    :param kw: Refer to :func:`connect_to_syslog()`.
    :returns: A :class:`~logging.handlers.SysLogHandler` object or
              :data:`None`. If an existing handler is found and `reconfigure`
              is :data:`False` the existing handler object is returned. If the
              connection to the system logging daemon fails :data:`None` is
              returned.

    As of release 15.0 this function uses :func:`is_syslog_supported()` to
    check whether system logging is supported and appropriate before it's
    enabled.

    .. note:: When the logger's effective level is too restrictive it is
              relaxed (refer to `notes about log levels`_ for details).
    """
    # Check whether system logging is supported / appropriate.
    if not is_syslog_supported():
        return None
    # Provide defaults for omitted arguments.
    programname = programname or find_program_name()
    logger = logger or logging.getLogger()
    fmt = fmt or DEFAULT_LOG_FORMAT
    level = level_to_number(kw.get('level', DEFAULT_LOG_LEVEL))
    # Check whether system logging is already enabled.
    handler, logger = replace_handler(logger, match_syslog_handler, reconfigure)
    # Make sure reconfiguration is allowed or not relevant.
    if not (handler and not reconfigure):
        # Create a system logging handler.
        handler = connect_to_syslog(**kw)
        # Make sure the handler was successfully created.
        if handler:
            # Enable the use of %(programname)s.
            ProgramNameFilter.install(handler=handler, fmt=fmt, programname=programname)
            # Connect the formatter, handler and logger.
            handler.setFormatter(logging.Formatter(fmt))
            logger.addHandler(handler)
            # Adjust the level of the selected logger.
            adjust_level(logger, level)
    return handler


def connect_to_syslog(address=None, facility=None, level=None):
    """
    Create a :class:`~logging.handlers.SysLogHandler`.

    :param address: The device file or network address of the system logging
                    daemon (a string or tuple, defaults to the result of
                    :func:`find_syslog_address()`).
    :param facility: Refer to :class:`~logging.handlers.SysLogHandler`.
                     Defaults to ``LOG_USER``.
    :param level: The logging level for the :class:`~logging.handlers.SysLogHandler`
                  (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced
                  using :func:`~coloredlogs.level_to_number()`.
    :returns: A :class:`~logging.handlers.SysLogHandler` object or :data:`None` (if the
              system logging daemon is unavailable).

    The process of connecting to the system logging daemon goes as follows:

    - The following two socket types are tried (in decreasing preference):

      1. :data:`~socket.SOCK_RAW` avoids truncation of log messages but may
         not be supported.
      2. :data:`~socket.SOCK_STREAM` (TCP) supports longer messages than the
         default (which is UDP).
    """
    if not address:
        address = find_syslog_address()
    if facility is None:
        facility = logging.handlers.SysLogHandler.LOG_USER
    if level is None:
        level = DEFAULT_LOG_LEVEL
    for socktype in socket.SOCK_RAW, socket.SOCK_STREAM, None:
        kw = dict(facility=facility, address=address)
        if socktype is not None:
            kw['socktype'] = socktype
        try:
            handler = logging.handlers.SysLogHandler(**kw)
        except IOError:
            # IOError is a superclass of socket.error which can be raised if the system
            # logging daemon is unavailable.
            pass
        else:
            handler.setLevel(level_to_number(level))
            return handler


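The loop above is a first-success fallback: each socket type is tried in order of preference and the first handler that can be constructed wins. The same pattern in isolation (the helper names here are hypothetical, used only for illustration):

```python
def first_successful(factories):
    # Try each zero-argument factory in order of preference and return the
    # first result whose construction doesn't raise OSError, mirroring the
    # SOCK_RAW -> SOCK_STREAM -> default fallback in connect_to_syslog().
    for factory in factories:
        try:
            return factory()
        except OSError:
            continue
    return None

def failing():
    raise OSError("not supported")

print(first_successful([failing, lambda: 'stream-handler']))
print(first_successful([failing, failing]))
```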
def find_syslog_address():
    """
    Find the most suitable destination for system log messages.

    :returns: The pathname of a log device (a string) or an address/port tuple as
              supported by :class:`~logging.handlers.SysLogHandler`.

    On Mac OS X this prefers :data:`LOG_DEVICE_MACOSX`, after that :data:`LOG_DEVICE_UNIX`
    is checked for existence. If neither of these device files exists the default used
    by :class:`~logging.handlers.SysLogHandler` is returned.
    """
    if sys.platform == 'darwin' and os.path.exists(LOG_DEVICE_MACOSX):
        return LOG_DEVICE_MACOSX
    elif os.path.exists(LOG_DEVICE_UNIX):
        return LOG_DEVICE_UNIX
    else:
        return 'localhost', logging.handlers.SYSLOG_UDP_PORT


def is_syslog_supported():
    """
    Determine whether system logging is supported.

    :returns:

        :data:`True` if system logging is supported and can be enabled,
        :data:`False` if system logging is not supported or there are good
        reasons for not enabling it.

    The decision making process here is as follows:

    Override
        If the environment variable ``$COLOREDLOGS_SYSLOG`` is set it is evaluated
        using :func:`~humanfriendly.coerce_boolean()` and the resulting value
        overrides the platform detection discussed below; this allows users to
        override the decision making process if they disagree / know better.

    Linux / UNIX
        On systems that are not Windows or MacOS (see below) we assume UNIX, which
        means either syslog is available or sending a bunch of UDP packets to
        nowhere won't hurt anyone...

    Microsoft Windows
        Over the years I've had multiple reports of :pypi:`coloredlogs` spewing
        extremely verbose errno 10057 warning messages to the console (once for
        each log message I suppose) so I now assume as a default that
        "syslog-style system logging" is not generally available on Windows.

    Apple MacOS
        There's cPython issue `#38780`_ which seems to result in a fatal exception
        when the Python interpreter shuts down. This is (way) worse than not
        having system logging enabled. The error message mentioned in `#38780`_
        has actually been following me around for years now, see for example:

        - https://github.com/xolox/python-rotate-backups/issues/9 mentions Docker
          images implying Linux, so not strictly the same as `#38780`_.

        - https://github.com/xolox/python-npm-accel/issues/4 is definitely related
          to `#38780`_ and is what eventually prompted me to add the
          :func:`is_syslog_supported()` logic.

    .. _#38780: https://bugs.python.org/issue38780
    """
    override = os.environ.get("COLOREDLOGS_SYSLOG")
    if override is not None:
        return coerce_boolean(override)
    else:
        return not (on_windows() or on_macos())


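The override path delegates boolean coercion to `humanfriendly.coerce_boolean()`. A stdlib-only sketch of the `$COLOREDLOGS_SYSLOG` override logic; the simplified word lists below are an assumption, not the exact coercion the real function performs:

```python
TRUE_WORDS = ('1', 'yes', 'true', 'on')
FALSE_WORDS = ('0', 'no', 'false', 'off', '')

def syslog_override(environ):
    # Returns True/False when the override is set, None to signal that the
    # caller should fall back to platform detection.
    value = environ.get('COLOREDLOGS_SYSLOG')
    if value is None:
        return None
    word = value.strip().lower()
    if word in TRUE_WORDS:
        return True
    if word in FALSE_WORDS:
        return False
    raise ValueError("can't coerce %r to a boolean" % value)

print(syslog_override({'COLOREDLOGS_SYSLOG': 'yes'}))
print(syslog_override({}))
```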
def match_syslog_handler(handler):
    """
    Identify system logging handlers.

    :param handler: The :class:`~logging.Handler` class to check.
    :returns: :data:`True` if the handler is a
              :class:`~logging.handlers.SysLogHandler`,
              :data:`False` otherwise.

    This function can be used as a callback for :func:`.find_handler()`.
    """
    return isinstance(handler, logging.handlers.SysLogHandler)
@@ -0,0 +1,673 @@
# Automated tests for the `coloredlogs' package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: June 11, 2021
# URL: https://coloredlogs.readthedocs.io

"""Automated tests for the `coloredlogs` package."""

# Standard library modules.
import contextlib
import logging
import logging.handlers
import os
import re
import subprocess
import sys
import tempfile

# External dependencies.
from humanfriendly.compat import StringIO
from humanfriendly.terminal import ANSI_COLOR_CODES, ANSI_CSI, ansi_style, ansi_wrap
from humanfriendly.testing import PatchedAttribute, PatchedItem, TestCase, retry
from humanfriendly.text import format, random_string

# The module we're testing.
import coloredlogs
import coloredlogs.cli
from coloredlogs import (
    CHROOT_FILES,
    ColoredFormatter,
    NameNormalizer,
    decrease_verbosity,
    find_defined_levels,
    find_handler,
    find_hostname,
    find_program_name,
    find_username,
    get_level,
    increase_verbosity,
    install,
    is_verbose,
    level_to_number,
    match_stream_handler,
    parse_encoded_styles,
    set_level,
    walk_propagation_tree,
)
from coloredlogs.demo import demonstrate_colored_logging
from coloredlogs.syslog import SystemLogging, is_syslog_supported, match_syslog_handler
from coloredlogs.converter import (
    ColoredCronMailer,
    EIGHT_COLOR_PALETTE,
    capture,
    convert,
)

# External test dependencies.
from capturer import CaptureOutput
from verboselogs import VerboseLogger

# Compiled regular expression that matches a single line of output produced by
# the default log format (does not include matching of ANSI escape sequences).
PLAIN_TEXT_PATTERN = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)

# Compiled regular expression that matches a single line of output produced by
# the default log format with milliseconds=True.
PATTERN_INCLUDING_MILLISECONDS = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2},\d{3} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)

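The verbose-mode pattern can be sanity-checked against a line in the default log format. The sample line below is made up for illustration; the pattern itself is reproduced so the example is self-contained:

```python
import re

# Same shape as PLAIN_TEXT_PATTERN above.
PLAIN_TEXT_PATTERN = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)

line = '2021-06-11 13:15:02 peter-laptop coloredlogs[1234] INFO Hello world'
match = PLAIN_TEXT_PATTERN.match(line)
print(match.group('severity'), match.group('message'))
```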
def setUpModule():
    """Speed up the tests by disabling the demo's artificial delay."""
    os.environ['COLOREDLOGS_DEMO_DELAY'] = '0'
    coloredlogs.demo.DEMO_DELAY = 0


class ColoredLogsTestCase(TestCase):

    """Container for the `coloredlogs` tests."""

    def find_system_log(self):
        """Find the system log file or skip the current test."""
        filename = ('/var/log/system.log' if sys.platform == 'darwin' else (
            '/var/log/syslog' if 'linux' in sys.platform else None
        ))
        if not filename:
            self.skipTest("Location of system log file unknown!")
        elif not os.path.isfile(filename):
            self.skipTest("System log file not found! (%s)" % filename)
        elif not os.access(filename, os.R_OK):
            self.skipTest("Insufficient permissions to read system log file! (%s)" % filename)
        else:
            return filename

    def test_level_to_number(self):
        """Make sure :func:`level_to_number()` works as intended."""
        # Make sure the default levels are translated as expected.
        assert level_to_number('debug') == logging.DEBUG
        assert level_to_number('info') == logging.INFO
        assert level_to_number('warning') == logging.WARNING
        assert level_to_number('error') == logging.ERROR
        assert level_to_number('fatal') == logging.FATAL
        # Make sure bogus level names don't blow up.
        assert level_to_number('bogus-level') == logging.INFO

    def test_find_hostname(self):
        """Make sure :func:`~find_hostname()` works correctly."""
        assert find_hostname()
        # Create a temporary file as a placeholder for e.g. /etc/debian_chroot.
        fd, temporary_file = tempfile.mkstemp()
        try:
            with open(temporary_file, 'w') as handle:
                handle.write('first line\n')
                handle.write('second line\n')
            CHROOT_FILES.insert(0, temporary_file)
            # Make sure the chroot file is being read.
            assert find_hostname() == 'first line'
        finally:
            # Clean up.
            CHROOT_FILES.pop(0)
            os.unlink(temporary_file)
        # Test that unreadable chroot files don't break coloredlogs.
        try:
            CHROOT_FILES.insert(0, temporary_file)
            # Make sure that a usable value is still produced.
            assert find_hostname()
        finally:
            # Clean up.
            CHROOT_FILES.pop(0)

    def test_host_name_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.HostNameFilter()`."""
        install(fmt='%(hostname)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_hostname() in output

    def test_program_name_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.ProgramNameFilter()`."""
        install(fmt='%(programname)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_program_name() in output

    def test_username_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.UserNameFilter()`."""
        install(fmt='%(username)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_username() in output

    def test_system_logging(self):
        """Make sure the :class:`coloredlogs.syslog.SystemLogging` context manager works."""
        system_log_file = self.find_system_log()
        expected_message = random_string(50)
        with SystemLogging(programname='coloredlogs-test-suite') as syslog:
            if not syslog:
                return self.skipTest("couldn't connect to syslog daemon")
            # When I tried out the system logging support on macOS 10.13.1 on
            # 2018-01-05 I found that while WARNING and ERROR messages show up
            # in the system log DEBUG and INFO messages don't. This explains
            # the importance of the level of the log message below.
            logging.error("%s", expected_message)
        # Retry the following assertion (for up to 60 seconds) to give the
        # logging daemon time to write our log message to disk. This
        # appears to be needed on MacOS workers on Travis CI, see:
        # https://travis-ci.org/xolox/python-coloredlogs/jobs/325245853
        retry(lambda: check_contents(system_log_file, expected_message, True))

    def test_system_logging_override(self):
        """Make sure the :class:`coloredlogs.syslog.is_syslog_supported` respects the override."""
        with PatchedItem(os.environ, 'COLOREDLOGS_SYSLOG', 'true'):
            assert is_syslog_supported() is True
        with PatchedItem(os.environ, 'COLOREDLOGS_SYSLOG', 'false'):
            assert is_syslog_supported() is False

    def test_syslog_shortcut_simple(self):
        """Make sure that ``coloredlogs.install(syslog=True)`` works."""
        system_log_file = self.find_system_log()
        expected_message = random_string(50)
        with cleanup_handlers():
            # See test_system_logging() for the importance of this log level.
            coloredlogs.install(syslog=True)
            logging.error("%s", expected_message)
        # See the comments in test_system_logging() on why this is retried.
        retry(lambda: check_contents(system_log_file, expected_message, True))

    def test_syslog_shortcut_enhanced(self):
        """Make sure that ``coloredlogs.install(syslog='warning')`` works."""
        system_log_file = self.find_system_log()
        the_expected_message = random_string(50)
        not_an_expected_message = random_string(50)
        with cleanup_handlers():
            # See test_system_logging() for the importance of these log levels.
            coloredlogs.install(syslog='error')
            logging.warning("%s", not_an_expected_message)
            logging.error("%s", the_expected_message)
        # See the comments in test_system_logging() on why this is retried.
        retry(lambda: check_contents(system_log_file, the_expected_message, True))
        retry(lambda: check_contents(system_log_file, not_an_expected_message, False))

    def test_name_normalization(self):
        """Make sure :class:`~coloredlogs.NameNormalizer` works as intended."""
        nn = NameNormalizer()
        for canonical_name in ['debug', 'info', 'warning', 'error', 'critical']:
            assert nn.normalize_name(canonical_name) == canonical_name
            assert nn.normalize_name(canonical_name.upper()) == canonical_name
        assert nn.normalize_name('warn') == 'warning'
        assert nn.normalize_name('fatal') == 'critical'

    def test_style_parsing(self):
        """Make sure :func:`~coloredlogs.parse_encoded_styles()` works as intended."""
        encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold'
        decoded_styles = parse_encoded_styles(encoded_styles, normalize_key=lambda k: k.upper())
        assert sorted(decoded_styles.keys()) == sorted(['debug', 'warning', 'error', 'critical'])
        assert decoded_styles['debug']['color'] == 'green'
        assert decoded_styles['warning']['color'] == 'yellow'
        assert decoded_styles['error']['color'] == 'red'
        assert decoded_styles['critical']['color'] == 'red'
        assert decoded_styles['critical']['bold'] is True

    def test_is_verbose(self):
        """Make sure is_verbose() does what it should :-)."""
        set_level(logging.INFO)
        assert not is_verbose()
        set_level(logging.DEBUG)
        assert is_verbose()
        set_level(logging.VERBOSE)
        assert is_verbose()

    def test_increase_verbosity(self):
        """Make sure increase_verbosity() respects default and custom levels."""
        # Start from a known state.
        set_level(logging.INFO)
        assert get_level() == logging.INFO
        # INFO -> VERBOSE.
        increase_verbosity()
        assert get_level() == logging.VERBOSE
        # VERBOSE -> DEBUG.
        increase_verbosity()
        assert get_level() == logging.DEBUG
        # DEBUG -> SPAM.
        increase_verbosity()
        assert get_level() == logging.SPAM
        # SPAM -> NOTSET.
        increase_verbosity()
        assert get_level() == logging.NOTSET
        # NOTSET -> NOTSET.
        increase_verbosity()
        assert get_level() == logging.NOTSET

    def test_decrease_verbosity(self):
        """Make sure decrease_verbosity() respects default and custom levels."""
        # Start from a known state.
        set_level(logging.INFO)
        assert get_level() == logging.INFO
        # INFO -> NOTICE.
        decrease_verbosity()
        assert get_level() == logging.NOTICE
        # NOTICE -> WARNING.
        decrease_verbosity()
        assert get_level() == logging.WARNING
        # WARNING -> SUCCESS.
        decrease_verbosity()
        assert get_level() == logging.SUCCESS
        # SUCCESS -> ERROR.
        decrease_verbosity()
        assert get_level() == logging.ERROR
        # ERROR -> CRITICAL.
        decrease_verbosity()
        assert get_level() == logging.CRITICAL
        # CRITICAL -> CRITICAL.
        decrease_verbosity()
        assert get_level() == logging.CRITICAL

    def test_level_discovery(self):
        """Make sure find_defined_levels() always reports the levels defined in Python's standard library."""
        defined_levels = find_defined_levels()
        level_values = defined_levels.values()
        for number in (0, 10, 20, 30, 40, 50):
            assert number in level_values

    def test_walk_propagation_tree(self):
        """Make sure walk_propagation_tree() properly walks the tree of loggers."""
        root, parent, child, grand_child = self.get_logger_tree()
        # Check the default mode of operation.
        loggers = list(walk_propagation_tree(grand_child))
        assert loggers == [grand_child, child, parent, root]
        # Now change the propagation (non-default mode of operation).
        child.propagate = False
        loggers = list(walk_propagation_tree(grand_child))
        assert loggers == [grand_child, child]

    def test_find_handler(self):
        """Make sure find_handler() works as intended."""
        root, parent, child, grand_child = self.get_logger_tree()
        # Add some handlers to the tree.
        stream_handler = logging.StreamHandler()
        syslog_handler = logging.handlers.SysLogHandler()
        child.addHandler(stream_handler)
        parent.addHandler(syslog_handler)
        # Make sure the first matching handler is returned.
        matched_handler, matched_logger = find_handler(grand_child, lambda h: isinstance(h, logging.Handler))
        assert matched_handler is stream_handler
        # Make sure the first matching handler of the given type is returned.
        matched_handler, matched_logger = find_handler(child, lambda h: isinstance(h, logging.handlers.SysLogHandler))
        assert matched_handler is syslog_handler

    def get_logger_tree(self):
        """Create and return a tree of loggers."""
        # Get the root logger.
        root = logging.getLogger()
        # Create a top level logger for ourselves.
        parent_name = random_string()
        parent = logging.getLogger(parent_name)
        # Create a child logger.
        child_name = '%s.%s' % (parent_name, random_string())
        child = logging.getLogger(child_name)
        # Create a grand child logger.
        grand_child_name = '%s.%s' % (child_name, random_string())
        grand_child = logging.getLogger(grand_child_name)
        return root, parent, child, grand_child

    def test_support_for_milliseconds(self):
        """Make sure milliseconds are hidden by default but can be easily enabled."""
        # Check that the default log format doesn't include milliseconds.
        stream = StringIO()
        install(reconfigure=True, stream=stream)
        logging.info("This should not include milliseconds.")
        assert all(map(PLAIN_TEXT_PATTERN.match, stream.getvalue().splitlines()))
        # Check that milliseconds can be enabled via a shortcut.
        stream = StringIO()
        install(milliseconds=True, reconfigure=True, stream=stream)
        logging.info("This should include milliseconds.")
        assert all(map(PATTERN_INCLUDING_MILLISECONDS.match, stream.getvalue().splitlines()))

    def test_support_for_milliseconds_directive(self):
        """Make sure milliseconds using the ``%f`` directive are supported."""
        stream = StringIO()
        install(reconfigure=True, stream=stream, datefmt='%Y-%m-%dT%H:%M:%S.%f%z')
        logging.info("This should be timestamped according to #45.")
        assert re.match(r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4}\s', stream.getvalue())

    def test_plain_text_output_format(self):
        """Inspect the plain text output of coloredlogs."""
        logger = VerboseLogger(random_string(25))
        stream = StringIO()
        install(level=logging.NOTSET, logger=logger, stream=stream)
        # Test that filtering on severity works.
        logger.setLevel(logging.INFO)
        logger.debug("No one should see this message.")
        assert len(stream.getvalue().strip()) == 0
        # Test that the default output format looks okay in plain text.
        logger.setLevel(logging.NOTSET)
        for method, severity in ((logger.debug, 'DEBUG'),
                                 (logger.info, 'INFO'),
                                 (logger.verbose, 'VERBOSE'),
                                 (logger.warning, 'WARNING'),
                                 (logger.error, 'ERROR'),
                                 (logger.critical, 'CRITICAL')):
            # XXX Workaround for a regression in Python 3.7 caused by the
            # Logger.isEnabledFor() method using stale cache entries. If we
            # don't clear the cache then logger.isEnabledFor(logging.DEBUG)
            # returns False and no DEBUG message is emitted.
            try:
                logger._cache.clear()
            except AttributeError:
                pass
            # Prepare the text.
            text = "This is a message with severity %r." % severity.lower()
            # Log the message with the given severity.
            method(text)
            # Get the line of output generated by the handler.
            output = stream.getvalue()
            lines = output.splitlines()
            last_line = lines[-1]
            assert text in last_line
            assert severity in last_line
            assert PLAIN_TEXT_PATTERN.match(last_line)

    def test_dynamic_stderr_lookup(self):
        """Make sure coloredlogs.install() uses StandardErrorHandler when possible."""
        coloredlogs.install()
        # Redirect sys.stderr to a temporary buffer.
        initial_stream = StringIO()
        initial_text = "Which stream will receive this text?"
        with PatchedAttribute(sys, 'stderr', initial_stream):
            logging.info(initial_text)
            assert initial_text in initial_stream.getvalue()
        # Redirect sys.stderr again, to a different destination.
        subsequent_stream = StringIO()
        subsequent_text = "And which stream will receive this other text?"
        with PatchedAttribute(sys, 'stderr', subsequent_stream):
            logging.info(subsequent_text)
            assert subsequent_text in subsequent_stream.getvalue()

    def test_force_enable(self):
        """Make sure ANSI escape sequences can be forced (bypassing auto-detection)."""
        interpreter = subprocess.Popen([
            sys.executable, "-c", ";".join([
                "import coloredlogs, logging",
                "coloredlogs.install(isatty=True)",
                "logging.info('Hello world')",
            ]),
        ], stderr=subprocess.PIPE)
        stdout, stderr = interpreter.communicate()
        assert ANSI_CSI in stderr.decode('UTF-8')

    def test_auto_disable(self):
        """
        Make sure ANSI escape sequences are not emitted when logging output is being redirected.

        This is a regression test for https://github.com/xolox/python-coloredlogs/issues/100.

        It works as follows:

        1. We mock an interactive terminal using 'capturer' to ensure that this
           test works inside test drivers that capture output (like pytest).

        2. We launch a subprocess (to ensure a clean process state) where
           stderr is captured but stdout is not, emulating issue #100.

        3. The output captured on stderr contained ANSI escape sequences after
           this test was written and before the issue was fixed, so now this
           serves as a regression test for issue #100.
        """
        with CaptureOutput():
            interpreter = subprocess.Popen([
                sys.executable, "-c", ";".join([
                    "import coloredlogs, logging",
                    "coloredlogs.install()",
                    "logging.info('Hello world')",
                ]),
            ], stderr=subprocess.PIPE)
            stdout, stderr = interpreter.communicate()
            assert ANSI_CSI not in stderr.decode('UTF-8')

    def test_env_disable(self):
        """Make sure ANSI escape sequences can be disabled using ``$NO_COLOR``."""
        with PatchedItem(os.environ, 'NO_COLOR', 'I like monochrome'):
            with CaptureOutput() as capturer:
                subprocess.check_call([
                    sys.executable, "-c", ";".join([
                        "import coloredlogs, logging",
                        "coloredlogs.install()",
                        "logging.info('Hello world')",
                    ]),
                ])
                output = capturer.get_text()
                assert ANSI_CSI not in output

    def test_html_conversion(self):
        """Check the conversion from ANSI escape sequences to HTML."""
        # Check conversion of colored text.
        for color_name, ansi_code in ANSI_COLOR_CODES.items():
            ansi_encoded_text = 'plain text followed by %s text' % ansi_wrap(color_name, color=color_name)
            expected_html = format(
                '<code>plain text followed by <span style="color:{css}">{name}</span> text</code>',
                css=EIGHT_COLOR_PALETTE[ansi_code], name=color_name,
            )
            self.assertEqual(expected_html, convert(ansi_encoded_text))
        # Check conversion of bright colored text.
        expected_html = '<code><span style="color:#FF0">bright yellow</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('bright yellow', color='yellow', bright=True)))
        # Check conversion of text with a background color.
        expected_html = '<code><span style="background-color:#DE382B">red background</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('red background', background='red')))
        # Check conversion of text with a bright background color.
        expected_html = '<code><span style="background-color:#F00">bright red background</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('bright red background', background='red', bright=True)))
        # Check conversion of text that uses the 256 color mode palette as a foreground color.
        expected_html = '<code><span style="color:#FFAF00">256 color mode foreground</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('256 color mode foreground', color=214)))
        # Check conversion of text that uses the 256 color mode palette as a background color.
        expected_html = '<code><span style="background-color:#AF0000">256 color mode background</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('256 color mode background', background=124)))
        # Check that invalid 256 color mode indexes don't raise exceptions.
        expected_html = '<code>plain text expected</code>'
        self.assertEqual(expected_html, convert('\x1b[38;5;256mplain text expected\x1b[0m'))
        # Check conversion of bold text.
        expected_html = '<code><span style="font-weight:bold">bold text</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('bold text', bold=True)))
        # Check conversion of underlined text.
        expected_html = '<code><span style="text-decoration:underline">underlined text</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('underlined text', underline=True)))
        # Check conversion of strike-through text.
        expected_html = '<code><span style="text-decoration:line-through">strike-through text</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('strike-through text', strike_through=True)))
        # Check conversion of inverse text.
        expected_html = '<code><span style="background-color:#FFC706;color:#000">inverse</span></code>'
        self.assertEqual(expected_html, convert(ansi_wrap('inverse', color='yellow', inverse=True)))
        # Check conversion of URLs.
        for sample_text in 'www.python.org', 'http://coloredlogs.rtfd.org', 'https://coloredlogs.rtfd.org':
            sample_url = sample_text if '://' in sample_text else ('http://' + sample_text)
            expected_html = '<code><a href="%s" style="color:inherit">%s</a></code>' % (sample_url, sample_text)
            self.assertEqual(expected_html, convert(sample_text))
        # Check that the capture pattern for URLs doesn't match ANSI escape
        # sequences and also check that the short hand for the 0 reset code is
        # supported. These are tests for regressions of bugs found in
        # coloredlogs <= 8.0.
        reset_short_hand = '\x1b[0m'
        blue_underlined = ansi_style(color='blue', underline=True)
        ansi_encoded_text = '<%shttps://coloredlogs.readthedocs.io%s>' % (blue_underlined, reset_short_hand)
        expected_html = (
            '<code><<span style="color:#006FB8;text-decoration:underline">'
            '<a href="https://coloredlogs.readthedocs.io" style="color:inherit">'
            'https://coloredlogs.readthedocs.io'
            '</a></span>></code>'
        )
        self.assertEqual(expected_html, convert(ansi_encoded_text))

    def test_output_interception(self):
        """Test capturing of output from external commands."""
        expected_output = 'testing, 1, 2, 3 ..'
        actual_output = capture(['echo', expected_output])
        assert actual_output.strip() == expected_output.strip()

    def test_enable_colored_cron_mailer(self):
        """Test that automatic ANSI to HTML conversion when running under ``cron`` can be enabled."""
        with PatchedItem(os.environ, 'CONTENT_TYPE', 'text/html'):
            with ColoredCronMailer() as mailer:
                assert mailer.is_enabled

    def test_disable_colored_cron_mailer(self):
        """Test that automatic ANSI to HTML conversion when running under ``cron`` can be disabled."""
        with PatchedItem(os.environ, 'CONTENT_TYPE', 'text/plain'):
            with ColoredCronMailer() as mailer:
                assert not mailer.is_enabled

    def test_auto_install(self):
        """Test :func:`coloredlogs.auto_install()`."""
        needle = random_string()
        command_line = [sys.executable, '-c', 'import logging; logging.info(%r)' % needle]
        # Sanity check that log messages aren't enabled by default.
        with CaptureOutput() as capturer:
            os.environ['COLOREDLOGS_AUTO_INSTALL'] = 'false'
            subprocess.check_call(command_line)
            output = capturer.get_text()
            assert needle not in output
        # Test that the $COLOREDLOGS_AUTO_INSTALL environment variable can be
        # used to automatically call coloredlogs.install() during initialization.
        with CaptureOutput() as capturer:
            os.environ['COLOREDLOGS_AUTO_INSTALL'] = 'true'
            subprocess.check_call(command_line)
            output = capturer.get_text()
            assert needle in output

    def test_cli_demo(self):
        """Test the command line colored logging demonstration."""
        with CaptureOutput() as capturer:
            main('coloredlogs', '--demo')
            output = capturer.get_text()
        # Make sure the output contains all of the expected logging level names.
        for name in 'debug', 'info', 'warning', 'error', 'critical':
            assert name.upper() in output

    def test_cli_conversion(self):
        """Test the command line HTML conversion."""
        output = main('coloredlogs', '--convert', 'coloredlogs', '--demo', capture=True)
        # Make sure the output is encoded as HTML.
        assert '<span' in output

    def test_empty_conversion(self):
        """
        Test that conversion of empty output produces no HTML.

        This test was added because I found that ``coloredlogs --convert`` when
        used in a cron job could cause cron to send out what appeared to be
        empty emails. On more careful inspection the body of those emails was
        ``<code></code>``. By not emitting the wrapper element when no other
        HTML is generated, cron will not send out an email.
        """
        output = main('coloredlogs', '--convert', 'true', capture=True)
        assert not output.strip()

    def test_implicit_usage_message(self):
        """Test that the usage message is shown when no actions are given."""
        assert 'Usage:' in main('coloredlogs', capture=True)

    def test_explicit_usage_message(self):
        """Test that the usage message is shown when ``--help`` is given."""
        assert 'Usage:' in main('coloredlogs', '--help', capture=True)

    def test_custom_record_factory(self):
        """
        Test that custom LogRecord factories are supported.

        This test is a bit convoluted because the logging module suppresses
        exceptions. We monkey patch the method suspected of encountering
        exceptions so that we can tell after it was called whether any
        exceptions occurred (despite the exceptions not propagating).
        """
        if not hasattr(logging, 'getLogRecordFactory'):
            return self.skipTest("this test requires Python >= 3.2")

        exceptions = []
        original_method = ColoredFormatter.format
        original_factory = logging.getLogRecordFactory()

        def custom_factory(*args, **kwargs):
            record = original_factory(*args, **kwargs)
            record.custom_attribute = 0xdecafbad
            return record

        def custom_method(*args, **kw):
            try:
                return original_method(*args, **kw)
            except Exception as e:
                exceptions.append(e)
                raise

        with PatchedAttribute(ColoredFormatter, 'format', custom_method):
            logging.setLogRecordFactory(custom_factory)
            try:
                demonstrate_colored_logging()
            finally:
                logging.setLogRecordFactory(original_factory)

        # Ensure that no exceptions were triggered.
        assert not exceptions


def check_contents(filename, contents, match):
    """Check if a line in a file contains an expected string."""
    with open(filename) as handle:
        assert any(contents in line for line in handle) == match


def main(*arguments, **options):
    """Wrap the command line interface to make it easier to test."""
    capture = options.get('capture', False)
    saved_argv = sys.argv
    saved_stdout = sys.stdout
    try:
        sys.argv = arguments
        if capture:
            sys.stdout = StringIO()
        coloredlogs.cli.main()
        if capture:
            return sys.stdout.getvalue()
    finally:
        sys.argv = saved_argv
        sys.stdout = saved_stdout


@contextlib.contextmanager
def cleanup_handlers():
    """Context manager to cleanup output handlers."""
    # There's nothing to set up so we immediately yield control.
    yield
    # After the with block ends we cleanup any output handlers.
    for match_func in match_stream_handler, match_syslog_handler:
        handler, logger = find_handler(logging.getLogger(), match_func)
        if handler and logger:
            logger.removeHandler(handler)
@@ -0,0 +1 @@
pip
@@ -0,0 +1,27 @@
Metadata-Version: 2.4
Name: flatbuffers
Version: 25.12.19
Summary: The FlatBuffers serialization format for Python
Home-page: https://google.github.io/flatbuffers/
Author: Derek Bailey
Author-email: derekbailey@google.com
License: Apache 2.0
Project-URL: Documentation, https://google.github.io/flatbuffers/
Project-URL: Source, https://github.com/google/flatbuffers
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: home-page
Dynamic: license
Dynamic: project-url
Dynamic: summary

Python runtime library for use with the `Flatbuffers <https://google.github.io/flatbuffers/>`_ serialization format.
@@ -0,0 +1,25 @@
flatbuffers-25.12.19.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
flatbuffers-25.12.19.dist-info/METADATA,sha256=nzgjn5b6ohUipqratpEuHlYHCLvpyfaWoGeclAP6CJ4,1004
flatbuffers-25.12.19.dist-info/RECORD,,
flatbuffers-25.12.19.dist-info/WHEEL,sha256=JNWh1Fm1UdwIQV075glCn4MVuCRs0sotJIq-J6rbxCU,109
flatbuffers-25.12.19.dist-info/top_level.txt,sha256=UXVWLA8ys6HeqTz6rfKesocUq6ln-ZL8mhZC_cq5BEc,12
flatbuffers/__init__.py,sha256=vJZrqZOOTKdBNMa_iTKUA6WJG_c_NzKGpFXOe1Igtiw,751
flatbuffers/__pycache__/__init__.cpython-312.pyc,,
flatbuffers/__pycache__/_version.cpython-312.pyc,,
flatbuffers/__pycache__/builder.cpython-312.pyc,,
flatbuffers/__pycache__/compat.cpython-312.pyc,,
flatbuffers/__pycache__/encode.cpython-312.pyc,,
flatbuffers/__pycache__/flexbuffers.cpython-312.pyc,,
flatbuffers/__pycache__/number_types.cpython-312.pyc,,
flatbuffers/__pycache__/packer.cpython-312.pyc,,
flatbuffers/__pycache__/table.cpython-312.pyc,,
flatbuffers/__pycache__/util.cpython-312.pyc,,
flatbuffers/_version.py,sha256=sk31rbYWDseoJxI9YQ1fYR-MMxRATyqsntKAEv7D7pI,696
flatbuffers/builder.py,sha256=HrG5KJ9rasiSTrMGeatkSdDs7fXV5fy_927Dsgakp4A,24567
flatbuffers/compat.py,sha256=ihBSpWDCSL-vgLSyZtcu8LX3ZI3wz9LhtqItY2GQZgg,2373
flatbuffers/encode.py,sha256=2Or3mgWRAkJiWg-GgYasDU4zIHpQU3W06fmIhwbz5uM,1550
flatbuffers/flexbuffers.py,sha256=yF8Wr4Lo8WJb-pj9NNaIYxLwzlHHyTroM0iO8fyDwbU,44454
flatbuffers/number_types.py,sha256=ijO0QcJiuxlQegoBOed0v9m0DdzTZHWxpTBZUqzsWHA,3762
flatbuffers/packer.py,sha256=LNWym8YgFRqHjcPeGpYY3inCGWH6XnbkQKtAPtFEVas,1164
flatbuffers/table.py,sha256=ciYTmq_CzAuYpb3KAVnl75M84ieChfbyKne-dFHzwwU,4818
flatbuffers/util.py,sha256=mRVQ1VoHp0MJMNtRTUGVzALwN4T_C-U14tMbj99py2A,1608
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: setuptools (80.9.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

@@ -0,0 +1 @@
flatbuffers
@@ -0,0 +1,19 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import util
from ._version import __version__
from .builder import Builder
from .compat import range_func as compat_range
from .table import Table
@@ -0,0 +1,17 @@
# Copyright 2019 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Placeholder, to be updated during the release process
# by the setup.py
__version__ = "25.12.19"
@@ -0,0 +1,858 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import warnings

from . import compat
from . import encode
from . import number_types as N
from . import packer
from .compat import memoryview_type
from .compat import NumpyRequiredForThisFeature, import_numpy
from .compat import range_func
from .number_types import (SOffsetTFlags, UOffsetTFlags, VOffsetTFlags)

np = import_numpy()
## @file
## @addtogroup flatbuffers_python_api
## @{


## @cond FLATBUFFERS_INTERNAL
class OffsetArithmeticError(RuntimeError):
    """Error caused by an Offset arithmetic error.

    Probably caused by bad writing of fields. This is considered an
    unreachable situation in normal circumstances.
    """
    pass


class IsNotNestedError(RuntimeError):
    """Error caused by using a Builder to write Object data when not inside
    an Object.
    """
    pass


class IsNestedError(RuntimeError):
    """Error caused by using a Builder to begin an Object when an Object is
    already being built.
    """
    pass


class StructIsNotInlineError(RuntimeError):
    """Error caused by using a Builder to write a Struct at a location that
    is not the current Offset.
    """
    pass


class BuilderSizeError(RuntimeError):
    """Error caused by a Builder exceeding the hardcoded limit of 2
    gigabytes.
    """
    pass


class BuilderNotFinishedError(RuntimeError):
    """Error caused by not calling `Finish` before calling `Output`."""
    pass


class EndVectorLengthMismatched(RuntimeError):
    """The number of elements passed to EndVector does not match the number
    specified in StartVector.
    """
    pass


# VtableMetadataFields is the count of metadata fields in each vtable.
VtableMetadataFields = 2
## @endcond

class Builder(object):
    """A Builder is used to construct one or more FlatBuffers.

    Typically, Builder objects will be used from code generated by the
    `flatc` compiler.

    A Builder constructs byte buffers in a last-first manner for simplicity
    and performance during reading.

    Internally, a Builder is a state machine for creating FlatBuffer
    objects.

    It holds the following internal state:
      - Bytes: an array of bytes.
      - current_vtable: a list of integers.
      - vtables: a hash of vtable entries.

    Attributes:
      Bytes: The internal `bytearray` for the Builder.
      finished: A boolean determining if the Builder has been finalized.
    """

    ## @cond FLATBUFFERS_INTERNAL
    __slots__ = (
        "Bytes",
        "current_vtable",
        "head",
        "minalign",
        "objectEnd",
        "vtables",
        "nested",
        "forceDefaults",
        "finished",
        "vectorNumElems",
        "sharedStrings",
    )

    """Maximum buffer size constant, in bytes.

    The Builder will never allow its buffer to grow over this size.
    Currently equals 2 GiB.
    """
    MAX_BUFFER_SIZE = 2**31
    ## @endcond

    def __init__(self, initialSize=1024):
        """Initializes a Builder of size `initialSize`.

        The internal buffer is grown as needed.
        """

        if not (0 <= initialSize <= Builder.MAX_BUFFER_SIZE):
            msg = "flatbuffers: Cannot create Builder larger than 2 gigabytes."
            raise BuilderSizeError(msg)

        self.Bytes = bytearray(initialSize)
        ## @cond FLATBUFFERS_INTERNAL
        self.current_vtable = None
        self.head = UOffsetTFlags.py_type(initialSize)
        self.minalign = 1
        self.objectEnd = None
        self.vtables = {}
        self.nested = False
        self.forceDefaults = False
        self.sharedStrings = None
        self.vectorNumElems = None
        ## @endcond
        self.finished = False

    def Clear(self):
        ## @cond FLATBUFFERS_INTERNAL
        self.current_vtable = None
        self.head = len(self.Bytes)
        self.minalign = 1
        self.objectEnd = None
        self.vtables = {}
        self.nested = False
        self.forceDefaults = False
        self.sharedStrings = None
        self.vectorNumElems = None
        ## @endcond
        self.finished = False

    def Output(self):
        """Return the portion of the buffer that has been used for writing
        data.

        This is the typical way to access the FlatBuffer data inside the
        builder. If you try to access `Builder.Bytes` directly, you would
        need to manually index it with `Head()`, since the buffer is
        constructed backwards.

        Raises BuilderNotFinishedError if the buffer has not been finished
        with `Finish`.
        """

        if not self.finished:
            raise BuilderNotFinishedError()

        return self.Bytes[self.head :]

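The last-first construction described in the Builder docstring can be sketched with a plain `bytearray` (an illustrative toy, not the flatbuffers API): data is always placed just before `head`, so the value written first ends up last in the finished output, exactly as `Output()` slices `Bytes[head:]`.

```python
# Minimal sketch of last-first buffer construction, mirroring what the
# Builder does internally: writes go from the end of a fixed buffer
# toward the front, and `head` marks the start of the used region.
buf = bytearray(16)          # analogous to Builder.Bytes
head = len(buf)              # analogous to Builder.head

def place(data):
    """Write `data` immediately before the current head."""
    global head
    head -= len(data)
    buf[head:head + len(data)] = data

place(b"world")              # written first, lands last
place(b"hello ")             # written second, lands first
output = bytes(buf[head:])   # analogous to Builder.Output()
```

Reading `buf[head:]` yields the data in front-to-back order even though it was written back-to-front.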
    ## @cond FLATBUFFERS_INTERNAL
    def StartObject(self, numfields):
        """StartObject initializes bookkeeping for writing a new object."""

        self.assertNotNested()

        # use 32-bit offsets so that arithmetic doesn't overflow.
        self.current_vtable = [0] * numfields
        self.objectEnd = self.Offset()
        self.nested = True

    def WriteVtable(self):
        """WriteVtable serializes the vtable for the current object, if
        needed.

        Before writing out the vtable, this checks pre-existing vtables for
        equality to this one. If an equal vtable is found, point the object
        to the existing vtable and return.

        Because vtable values are sensitive to alignment of object data, not
        all logically-equal vtables will be deduplicated.

        A vtable has the following format:
          <VOffsetT: size of the vtable in bytes, including this value>
          <VOffsetT: size of the object in bytes, including the vtable offset>
          <VOffsetT: offset for a field> * N, where N is the number of fields
              in the schema for this type. Includes deprecated fields.
        Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide.

        An object has the following format:
          <SOffsetT: offset to this object's vtable (may be negative)>
          <byte: data>+
        """

        # Prepend a zero scalar to the object. Later in this function we'll
        # write an offset here that points to the object's vtable:
        self.PrependSOffsetTRelative(0)

        objectOffset = self.Offset()

        vtKey = []
        trim = True
        for elem in reversed(self.current_vtable):
            if elem == 0:
                if trim:
                    continue
            else:
                elem = objectOffset - elem
                trim = False

            vtKey.append(elem)

        objectSize = UOffsetTFlags.py_type(objectOffset - self.objectEnd)
        vtKey.append(objectSize)
        vtKey = tuple(vtKey)
        # Check whether an identical vtable has already been written:
        vt2Offset = self.vtables.get(vtKey)
        if vt2Offset is None:
            # Did not find a vtable, so write this one to the buffer.

            # Write out the current vtable in reverse, because
            # serialization occurs in last-first order:
            i = len(self.current_vtable) - 1
            trailing = 0
            trim = True
            while i >= 0:
                off = 0
                elem = self.current_vtable[i]
                i -= 1

                if elem == 0:
                    if trim:
                        trailing += 1
                        continue
                else:
                    # Forward reference to field;
                    # use a 32-bit number to ensure no overflow:
                    off = objectOffset - elem
                    trim = False

                self.PrependVOffsetT(off)

            # The two metadata fields are written last.

            # First, store the object bytesize:
            self.PrependVOffsetT(VOffsetTFlags.py_type(objectSize))

            # Second, store the vtable bytesize:
            vBytes = len(self.current_vtable) - trailing + VtableMetadataFields
            vBytes *= N.VOffsetTFlags.bytewidth
            self.PrependVOffsetT(VOffsetTFlags.py_type(vBytes))

            # Next, write the offset to the new vtable in the
            # already-allocated SOffsetT at the beginning of this object:
            objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
            encode.Write(
                packer.soffset,
                self.Bytes,
                objectStart,
                SOffsetTFlags.py_type(self.Offset() - objectOffset),
            )

            # Finally, store this vtable in memory for future
            # deduplication:
            self.vtables[vtKey] = self.Offset()
        else:
            # Found a duplicate vtable.
            objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
            self.head = UOffsetTFlags.py_type(objectStart)

            # Write the offset to the found vtable in the
            # already-allocated SOffsetT at the beginning of this object:
            encode.Write(
                packer.soffset,
                self.Bytes,
                self.Head(),
                SOffsetTFlags.py_type(vt2Offset - objectOffset),
            )

        self.current_vtable = None
        return objectOffset

    def EndObject(self):
        """EndObject writes data necessary to finish object construction."""
        self.assertNested()
        self.nested = False
        return self.WriteVtable()

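The deduplication key computed at the top of `WriteVtable` can be isolated into a small standalone helper (a hypothetical function, for illustration only): trailing unset slots are trimmed, and every set slot becomes its distance from the end of the object, so two objects with the same field layout produce the same key.

```python
def vtable_key(current_vtable, object_offset):
    """Sketch of the dedup key built in WriteVtable: trailing zero
    (unset) slots are dropped, set slots are stored as offsets from
    the object's end. Illustrative, not part of the flatbuffers API."""
    key = []
    trim = True
    for elem in reversed(current_vtable):
        if elem == 0:
            if trim:
                continue  # drop trailing zeros only
        else:
            elem = object_offset - elem
            trim = False
        key.append(elem)
    return tuple(key)
```

An interior zero slot survives (as 0) because `trim` flips to `False` at the first set slot; only the trailing run is removed.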
    def GrowByteBuffer(self):
        """Doubles the size of the byteslice, and copies the old data
        towards the end of the new buffer (since we build the buffer
        backwards).
        """
        if len(self.Bytes) == Builder.MAX_BUFFER_SIZE:
            msg = "flatbuffers: cannot grow buffer beyond 2 gigabytes"
            raise BuilderSizeError(msg)

        newSize = min(len(self.Bytes) * 2, Builder.MAX_BUFFER_SIZE)
        if newSize == 0:
            newSize = 1
        bytes2 = bytearray(newSize)
        bytes2[newSize - len(self.Bytes) :] = self.Bytes
        self.Bytes = bytes2

    ## @endcond

    def Head(self):
        """Get the start of useful data in the underlying byte buffer.

        Note: unlike other functions, this value is interpreted as from the
        left.
        """
        ## @cond FLATBUFFERS_INTERNAL
        return self.head
        ## @endcond

    ## @cond FLATBUFFERS_INTERNAL
    def Offset(self):
        """Offset relative to the end of the buffer."""
        return len(self.Bytes) - self.head

    def Pad(self, n):
        """Pad places zeros at the current offset."""
        if n <= 0:
            return
        new_head = self.head - n
        self.Bytes[new_head : self.head] = b"\x00" * n
        self.head = new_head

    def Prep(self, size, additionalBytes):
        """Prep prepares to write an element of `size` after
        `additionalBytes` have been written, e.g. if you write a string,
        you need to align such that the int length field is aligned to
        SizeInt32, and the string data follows it directly.
        If all you need to do is align, `additionalBytes` will be 0.
        """

        # Track the biggest thing we've ever aligned to.
        if size > self.minalign:
            self.minalign = size

        # Find the amount of alignment needed such that `size` is properly
        # aligned after `additionalBytes`:
        head = self.head
        buf_len = len(self.Bytes)
        alignSize = (~(buf_len - head + additionalBytes)) + 1
        alignSize &= size - 1

        # Reallocate the buffer if needed:
        needed = alignSize + size + additionalBytes
        while head < needed:
            oldBufSize = buf_len
            self.GrowByteBuffer()
            buf_len = len(self.Bytes)
            head += buf_len - oldBufSize
        self.head = head
        self.Pad(alignSize)

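The two's-complement trick in `Prep` (`~x + 1` is `-x`, masked by `size - 1`) computes how many padding bytes make the next value land on a `size`-aligned boundary. The arithmetic in isolation, as a hypothetical helper:

```python
def align_padding(unaligned_len, size):
    """Bytes of zero padding needed so that a value of width `size`
    (a power of two) ends on a `size`-aligned boundary, mirroring the
    alignSize arithmetic in Builder.Prep (illustrative helper)."""
    return (~unaligned_len + 1) & (size - 1)
```

For example, with 5 bytes already written and a 4-byte value next, 3 padding bytes bring the total to 8, a multiple of 4.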
    def PrependSOffsetTRelative(self, off):
        """PrependSOffsetTRelative prepends an SOffsetT, relative to where
        it will be written.
        """

        # Ensure alignment is already done:
        self.Prep(N.SOffsetTFlags.bytewidth, 0)
        if not (off <= self.Offset()):
            msg = "flatbuffers: Offset arithmetic error."
            raise OffsetArithmeticError(msg)
        off2 = self.Offset() - off + N.SOffsetTFlags.bytewidth
        self.PlaceSOffsetT(off2)

    ## @endcond

    def PrependUOffsetTRelative(self, off):
        """Prepends an unsigned offset into vector data, relative to where
        it will be written.
        """

        # Ensure alignment is already done:
        self.Prep(N.UOffsetTFlags.bytewidth, 0)
        if not (off <= self.Offset()):
            msg = "flatbuffers: Offset arithmetic error."
            raise OffsetArithmeticError(msg)
        off2 = self.Offset() - off + N.UOffsetTFlags.bytewidth
        self.PlaceUOffsetT(off2)

    ## @cond FLATBUFFERS_INTERNAL
    def StartVector(self, elemSize, numElems, alignment):
        """StartVector initializes bookkeeping for writing a new vector.

        A vector has the following format:
          - <UOffsetT: number of elements in this vector>
          - <T: data>+, where T is the type of elements of this vector.
        """

        self.assertNotNested()
        self.nested = True
        self.vectorNumElems = numElems
        self.Prep(N.Uint32Flags.bytewidth, elemSize * numElems)
        self.Prep(alignment, elemSize * numElems)  # In case alignment > int.
        return self.Offset()

    ## @endcond

    def EndVector(self, numElems=None):
        """EndVector writes data necessary to finish vector construction."""

        self.assertNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = False
        ## @endcond

        if numElems:
            warnings.warn(
                "numElems is deprecated.", DeprecationWarning, stacklevel=2
            )
            if numElems != self.vectorNumElems:
                raise EndVectorLengthMismatched()

        # We already made space for this, so write without PrependUint32:
        self.PlaceUOffsetT(self.vectorNumElems)
        self.vectorNumElems = None
        return self.Offset()

    def CreateSharedString(self, s, encoding="utf-8", errors="strict"):
        """CreateSharedString checks if the string is already written to
        the buffer before calling CreateString.
        """

        if not self.sharedStrings:
            self.sharedStrings = {}
        elif s in self.sharedStrings:
            return self.sharedStrings[s]

        off = self.CreateString(s, encoding, errors)
        self.sharedStrings[s] = off

        return off

    def CreateString(self, s, encoding="utf-8", errors="strict"):
        """CreateString writes a null-terminated byte string as a vector."""

        self.assertNotNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = True
        ## @endcond

        if isinstance(s, compat.string_types):
            x = s.encode(encoding, errors)
        elif isinstance(s, compat.binary_types):
            x = s
        else:
            raise TypeError("non-string passed to CreateString")

        payload_len = len(x)
        self.Prep(N.UOffsetTFlags.bytewidth, (payload_len + 1) * N.Uint8Flags.bytewidth)
        self.Place(0, N.Uint8Flags)

        new_head = self.head - payload_len
        self.head = new_head
        self.Bytes[new_head : new_head + payload_len] = x

        self.vectorNumElems = payload_len
        return self.EndVector()

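The resulting string region follows the vector format from `StartVector`: a little-endian uint32 element count, the UTF-8 payload, then the NUL terminator that `Place(0, N.Uint8Flags)` wrote first. A standalone sketch of that layout (alignment padding omitted for clarity; this is an illustration, not the full flatbuffers wire format):

```python
import struct

def encode_string_region(s):
    """Lay out a string the way CreateString stores it: little-endian
    uint32 length, payload bytes, trailing NUL. Illustrative sketch;
    real buffers also carry alignment padding around this region."""
    payload = s.encode("utf-8")
    return struct.pack("<I", len(payload)) + payload + b"\x00"

region = encode_string_region("hi")
```

Note the length counts payload bytes only; the terminator is not included, which is why `CreateString` sets `vectorNumElems = payload_len`.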
    def CreateByteVector(self, x):
        """CreateByteVector writes a byte vector."""

        self.assertNotNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = True
        ## @endcond

        if not isinstance(x, compat.binary_types):
            raise TypeError("non-byte vector passed to CreateByteVector")

        data_len = len(x)
        self.Prep(N.UOffsetTFlags.bytewidth, data_len * N.Uint8Flags.bytewidth)
        new_head = self.head - data_len
        self.head = new_head
        self.Bytes[new_head : new_head + data_len] = x

        self.vectorNumElems = data_len
        return self.EndVector()

    def CreateNumpyVector(self, x):
        """CreateNumpyVector writes a numpy array into the buffer."""

        if np is None:
            # Numpy is required for this feature.
            raise NumpyRequiredForThisFeature("Numpy was not found.")

        if not isinstance(x, np.ndarray):
            raise TypeError("non-numpy-ndarray passed to CreateNumpyVector")

        if x.dtype.kind not in ["b", "i", "u", "f"]:
            raise TypeError("numpy-ndarray holds elements of unsupported datatype")

        if x.ndim > 1:
            raise TypeError("multidimensional-ndarray passed to CreateNumpyVector")

        self.StartVector(x.itemsize, x.size, x.dtype.alignment)

        # Ensure little-endian byte ordering:
        if x.dtype.str[0] == "<":
            x_lend = x
        else:
            x_lend = x.byteswap(inplace=False)

        # tobytes ensures C-contiguous ordering.
        payload = x_lend.tobytes(order="C")

        # Calculate the total length and copy the payload in:
        payload_len = len(payload)
        new_head = self.head - payload_len
        self.head = new_head
        self.Bytes[new_head : new_head + payload_len] = payload

        self.vectorNumElems = x.size
        return self.EndVector()

    ## @cond FLATBUFFERS_INTERNAL
    def assertNested(self):
        """Check that we are in the process of building an object."""

        if not self.nested:
            raise IsNotNestedError()

    def assertNotNested(self):
        """Check that no other objects are being built while making this
        object. If another object is in progress, raise an exception.
        """

        if self.nested:
            raise IsNestedError()

    def assertStructIsInline(self, obj):
        """Structs are always stored inline, so they need to be created
        right where they are used. You'll get this error if you created one
        elsewhere.
        """

        N.enforce_number(obj, N.UOffsetTFlags)
        if obj != self.Offset():
            msg = (
                "flatbuffers: Tried to write a Struct at an Offset that "
                "is different from the current Offset of the Builder."
            )
            raise StructIsNotInlineError(msg)

    def Slot(self, slotnum):
        """Slot sets the vtable key `voffset` to the current location in
        the buffer.
        """
        self.assertNested()
        self.current_vtable[slotnum] = self.Offset()

    ## @endcond

    def __Finish(self, rootTable, sizePrefix, file_identifier=None):
        """Finish finalizes a buffer, pointing to the given `rootTable`."""
        N.enforce_number(rootTable, N.UOffsetTFlags)

        prepSize = N.UOffsetTFlags.bytewidth
        if file_identifier is not None:
            prepSize += N.Int32Flags.bytewidth
        if sizePrefix:
            prepSize += N.Int32Flags.bytewidth
        self.Prep(self.minalign, prepSize)

        if file_identifier is not None:
            self.Prep(N.UOffsetTFlags.bytewidth, encode.FILE_IDENTIFIER_LENGTH)

            # Convert the bytes object file_identifier to an array of four
            # 8-bit integers, using big-endian to enforce size compliance.
            # https://docs.python.org/2/library/struct.html#format-characters
            file_identifier = N.struct.unpack(">BBBB", file_identifier)
            for i in range(encode.FILE_IDENTIFIER_LENGTH - 1, -1, -1):
                # Place the bytes of the file_identifier in reverse order:
                self.Place(file_identifier[i], N.Uint8Flags)

        self.PrependUOffsetTRelative(rootTable)
        if sizePrefix:
            size = len(self.Bytes) - self.head
            N.enforce_number(size, N.Int32Flags)
            self.PrependInt32(size)
        self.finished = True
        return self.head

    def Finish(self, rootTable, file_identifier=None):
        """Finish finalizes a buffer, pointing to the given `rootTable`."""
        return self.__Finish(rootTable, False, file_identifier=file_identifier)

    def FinishSizePrefixed(self, rootTable, file_identifier=None):
        """FinishSizePrefixed finalizes a buffer, pointing to the given
        `rootTable`, with the size prefixed.
        """
        return self.__Finish(rootTable, True, file_identifier=file_identifier)

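The file-identifier handling in `__Finish` splits a 4-byte identifier into integers with a big-endian unpack and places them in reverse index order; because each `Place` lands in front of the previous one in the last-first buffer, the finished buffer reads the identifier front-to-back in its original order. A sketch with a hypothetical identifier value:

```python
import struct

# A 4-byte file identifier (hypothetical value "MONS") split into four
# 8-bit integers, as __Finish does before placing them one at a time:
ident = struct.unpack(">BBBB", b"MONS")

# __Finish places ident[3], ident[2], ident[1], ident[0]; each placed
# byte lands in front of the previous one, so reading the finished
# region front-to-back restores the original byte order:
restored = bytes(ident[i] for i in (3, 2, 1, 0))[::-1]
```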
    ## @cond FLATBUFFERS_INTERNAL
    def Prepend(self, flags, off):
        self.Prep(flags.bytewidth, 0)
        self.Place(off, flags)

    def PrependSlot(self, flags, o, x, d):
        if x is not None:
            N.enforce_number(x, flags)
        if d is not None:
            N.enforce_number(d, flags)
        if x != d or (self.forceDefaults and d is not None):
            self.Prepend(flags, x)
            self.Slot(o)

    def PrependBoolSlot(self, *args):
        self.PrependSlot(N.BoolFlags, *args)

    def PrependByteSlot(self, *args):
        self.PrependSlot(N.Uint8Flags, *args)

    def PrependUint8Slot(self, *args):
        self.PrependSlot(N.Uint8Flags, *args)

    def PrependUint16Slot(self, *args):
        self.PrependSlot(N.Uint16Flags, *args)

    def PrependUint32Slot(self, *args):
        self.PrependSlot(N.Uint32Flags, *args)

    def PrependUint64Slot(self, *args):
        self.PrependSlot(N.Uint64Flags, *args)

    def PrependInt8Slot(self, *args):
        self.PrependSlot(N.Int8Flags, *args)

    def PrependInt16Slot(self, *args):
        self.PrependSlot(N.Int16Flags, *args)

    def PrependInt32Slot(self, *args):
        self.PrependSlot(N.Int32Flags, *args)

    def PrependInt64Slot(self, *args):
        self.PrependSlot(N.Int64Flags, *args)

    def PrependFloat32Slot(self, *args):
        self.PrependSlot(N.Float32Flags, *args)

    def PrependFloat64Slot(self, *args):
        self.PrependSlot(N.Float64Flags, *args)

    def PrependUOffsetTRelativeSlot(self, o, x, d):
        """PrependUOffsetTRelativeSlot prepends a UOffsetT onto the object
        at vtable slot `o`. If value `x` equals default `d`, then the slot
        will be set to zero and no other data will be written.
        """

        if x != d or self.forceDefaults:
            self.PrependUOffsetTRelative(x)
            self.Slot(o)

    def PrependStructSlot(self, v, x, d):
        """PrependStructSlot prepends a struct onto the object at vtable
        slot `v`.

        Structs are stored inline, so nothing additional is being added. In
        generated code, `d` is always 0.
        """

        N.enforce_number(d, N.UOffsetTFlags)
        if x != d:
            self.assertStructIsInline(x)
            self.Slot(v)

    ## @endcond

    def PrependBool(self, x):
        """Prepend a `bool` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.BoolFlags, x)

    def PrependByte(self, x):
        """Prepend a `byte` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint8Flags, x)

    def PrependUint8(self, x):
        """Prepend a `uint8` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint8Flags, x)

    def PrependUint16(self, x):
        """Prepend a `uint16` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint16Flags, x)

    def PrependUint32(self, x):
        """Prepend a `uint32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint32Flags, x)

    def PrependUint64(self, x):
        """Prepend a `uint64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint64Flags, x)

    def PrependInt8(self, x):
        """Prepend an `int8` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int8Flags, x)

    def PrependInt16(self, x):
        """Prepend an `int16` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int16Flags, x)

    def PrependInt32(self, x):
        """Prepend an `int32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int32Flags, x)

    def PrependInt64(self, x):
        """Prepend an `int64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int64Flags, x)

    def PrependFloat32(self, x):
        """Prepend a `float32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Float32Flags, x)

    def PrependFloat64(self, x):
        """Prepend a `float64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Float64Flags, x)

    def ForceDefaults(self, forceDefaults):
        """In order to save space, fields that are set to their default
        value don't get serialized into the buffer. Forcing defaults
        provides a way to manually disable this optimization. When set to
        `True`, the Builder will always serialize default values.
        """
        self.forceDefaults = forceDefaults

    ##############################################################

    ## @cond FLATBUFFERS_INTERNAL
    def PrependVOffsetT(self, x):
        self.Prepend(N.VOffsetTFlags, x)

    def Place(self, x, flags):
        """Place prepends a value specified by `flags` to the Builder,
        without checking for available space.
        """

        N.enforce_number(x, flags)
        new_head = self.head - flags.bytewidth
        self.head = new_head
        encode.Write(flags.packer_type, self.Bytes, new_head, x)

    def PlaceVOffsetT(self, x):
        """PlaceVOffsetT prepends a VOffsetT to the Builder, without
        checking for space.
        """
        N.enforce_number(x, N.VOffsetTFlags)
        new_head = self.head - N.VOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.voffset, self.Bytes, new_head, x)

    def PlaceSOffsetT(self, x):
        """PlaceSOffsetT prepends an SOffsetT to the Builder, without
        checking for space.
        """
        N.enforce_number(x, N.SOffsetTFlags)
        new_head = self.head - N.SOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.soffset, self.Bytes, new_head, x)

    def PlaceUOffsetT(self, x):
        """PlaceUOffsetT prepends a UOffsetT to the Builder, without
        checking for space.
        """
        N.enforce_number(x, N.UOffsetTFlags)
        new_head = self.head - N.UOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.uoffset, self.Bytes, new_head, x)

    ## @endcond

## @}
@@ -0,0 +1,91 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A tiny version of `six` to help with backwards compatibility.

Also includes compatibility helpers for numpy.
"""

import sys

PY2 = sys.version_info[0] == 2
PY26 = sys.version_info[0:2] == (2, 6)
PY27 = sys.version_info[0:2] == (2, 7)
PY275 = sys.version_info[0:3] >= (2, 7, 5)
PY3 = sys.version_info[0] == 3
PY34 = sys.version_info[0:2] >= (3, 4)

if PY3:
    import importlib.machinery

    string_types = (str,)
    binary_types = (bytes, bytearray)
    range_func = range
    memoryview_type = memoryview
    struct_bool_decl = "?"
else:
    import imp

    string_types = (unicode,)
    if PY26 or PY27:
        binary_types = (str, bytearray)
    else:
        binary_types = (str,)
    range_func = xrange
    if PY26 or (PY27 and not PY275):
        memoryview_type = buffer
        struct_bool_decl = "<b"
    else:
        memoryview_type = memoryview
        struct_bool_decl = "?"

# Helper functions to facilitate making numpy optional instead of required.


def import_numpy():
    """Returns the numpy module if it exists on the system,
    otherwise returns None.
    """
    if PY3:
        numpy_exists = importlib.machinery.PathFinder.find_spec("numpy") is not None
    else:
        try:
            imp.find_module("numpy")
            numpy_exists = True
        except ImportError:
            numpy_exists = False

    if numpy_exists:
        # We probe for numpy outside the try/except block so that a
        # broken numpy installation still raises: an ImportError from a
        # bad install should not be mistaken for numpy being absent.
        import numpy as np
    else:
        np = None

    return np


class NumpyRequiredForThisFeature(RuntimeError):
    """Error raised when a user tries to use a feature that
    requires numpy without having numpy installed.
    """
    pass


# NOTE: Future Jython support may require code here (look at `six`).
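The probe-then-import pattern in `import_numpy` generalizes to any module name. A sketch using a hypothetical helper (not part of this package), probing the stdlib `json` module and a name that does not exist:

```python
import importlib
import importlib.machinery

def import_optional(name):
    """Probe for a module with the import machinery first, and import
    it only if found, mirroring import_numpy (illustrative helper)."""
    if importlib.machinery.PathFinder.find_spec(name) is None:
        return None
    return importlib.import_module(name)

json_mod = import_optional("json")               # present on sys.path
missing = import_optional("no_such_module_xyz")  # not installed
```

As in `import_numpy`, the actual `import` happens outside the probe, so a found-but-broken module still raises rather than silently reporting as absent.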
@@ -0,0 +1,45 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import number_types as N
from . import packer
from .compat import memoryview_type
from .compat import NumpyRequiredForThisFeature, import_numpy

np = import_numpy()

FILE_IDENTIFIER_LENGTH = 4


def Get(packer_type, buf, head):
    """Get decodes a value at buf[head] using `packer_type`."""
    return packer_type.unpack_from(memoryview_type(buf), head)[0]


def GetVectorAsNumpy(numpy_type, buf, count, offset):
    """GetVectorAsNumpy decodes values starting at buf[offset] as
    `numpy_type`, where `numpy_type` is a numpy dtype.
    """
    if np is not None:
        # TODO: could set .flags.writeable = False to make users jump through
        # hoops before modifying...
        return np.frombuffer(buf, dtype=numpy_type, count=count, offset=offset)
    else:
        raise NumpyRequiredForThisFeature('Numpy was not found.')


def Write(packer_type, buf, head, n):
    """Write encodes `n` at buf[head] using `packer_type`."""
    packer_type.pack_into(buf, head, n)
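The `Get`/`Write` pair above is a thin wrapper over pre-compiled `struct.Struct` objects. A stdlib-only sketch of the same semantics (assumed equivalents, not the flatbuffers module itself):

```python
import struct

uint16 = struct.Struct("<H")  # little-endian unsigned 16-bit

def get(packer_type, buf, head):
    # Decode one scalar at byte offset `head`.
    return packer_type.unpack_from(memoryview(buf), head)[0]

def write(packer_type, buf, head, n):
    # Encode `n` in place at byte offset `head`.
    packer_type.pack_into(buf, head, n)

buf = bytearray(8)
write(uint16, buf, 2, 0xBEEF)
assert get(uint16, buf, 2) == 0xBEEF
```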
File diff suppressed because it is too large
@@ -0,0 +1,182 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import collections
import struct

from . import packer
from .compat import NumpyRequiredForThisFeature, import_numpy

np = import_numpy()

# For reference, see:
# https://docs.python.org/2/library/ctypes.html#ctypes-fundamental-data-types-2

# These classes could be collections.namedtuple instances, but those are new
# in 2.6 and we want to work towards 2.5 compatibility.


class BoolFlags(object):
    bytewidth = 1
    min_val = False
    max_val = True
    py_type = bool
    name = "bool"
    packer_type = packer.boolean


class Uint8Flags(object):
    bytewidth = 1
    min_val = 0
    max_val = (2**8) - 1
    py_type = int
    name = "uint8"
    packer_type = packer.uint8


class Uint16Flags(object):
    bytewidth = 2
    min_val = 0
    max_val = (2**16) - 1
    py_type = int
    name = "uint16"
    packer_type = packer.uint16


class Uint32Flags(object):
    bytewidth = 4
    min_val = 0
    max_val = (2**32) - 1
    py_type = int
    name = "uint32"
    packer_type = packer.uint32


class Uint64Flags(object):
    bytewidth = 8
    min_val = 0
    max_val = (2**64) - 1
    py_type = int
    name = "uint64"
    packer_type = packer.uint64


class Int8Flags(object):
    bytewidth = 1
    min_val = -(2**7)
    max_val = (2**7) - 1
    py_type = int
    name = "int8"
    packer_type = packer.int8


class Int16Flags(object):
    bytewidth = 2
    min_val = -(2**15)
    max_val = (2**15) - 1
    py_type = int
    name = "int16"
    packer_type = packer.int16


class Int32Flags(object):
    bytewidth = 4
    min_val = -(2**31)
    max_val = (2**31) - 1
    py_type = int
    name = "int32"
    packer_type = packer.int32


class Int64Flags(object):
    bytewidth = 8
    min_val = -(2**63)
    max_val = (2**63) - 1
    py_type = int
    name = "int64"
    packer_type = packer.int64


class Float32Flags(object):
    bytewidth = 4
    min_val = None
    max_val = None
    py_type = float
    name = "float32"
    packer_type = packer.float32


class Float64Flags(object):
    bytewidth = 8
    min_val = None
    max_val = None
    py_type = float
    name = "float64"
    packer_type = packer.float64


class SOffsetTFlags(Int32Flags):
    pass


class UOffsetTFlags(Uint32Flags):
    pass


class VOffsetTFlags(Uint16Flags):
    pass


def valid_number(n, flags):
    if flags.min_val is None and flags.max_val is None:
        return True
    return flags.min_val <= n <= flags.max_val


def enforce_number(n, flags):
    if flags.min_val is None and flags.max_val is None:
        return
    if not flags.min_val <= n <= flags.max_val:
        raise TypeError("bad number %s for type %s" % (str(n), flags.name))


def float32_to_uint32(n):
    packed = struct.pack("<1f", n)
    (converted,) = struct.unpack("<1L", packed)
    return converted


def uint32_to_float32(n):
    packed = struct.pack("<1L", n)
    (unpacked,) = struct.unpack("<1f", packed)
    return unpacked


def float64_to_uint64(n):
    packed = struct.pack("<1d", n)
    (converted,) = struct.unpack("<1Q", packed)
    return converted


def uint64_to_float64(n):
    packed = struct.pack("<1Q", n)
    (unpacked,) = struct.unpack("<1d", packed)
    return unpacked


def to_numpy_type(number_type):
    if np is not None:
        return np.dtype(number_type.name).newbyteorder("<")
    else:
        raise NumpyRequiredForThisFeature("Numpy was not found.")
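The `float32_to_uint32` family above reinterprets a float's IEEE-754 bit pattern as an unsigned integer (and back) by packing with one format and unpacking with another. A stand-alone sketch of that round trip:

```python
import struct

def float32_to_uint32(n):
    # Pack as little-endian float32, reinterpret the 4 bytes as uint32.
    (converted,) = struct.unpack("<L", struct.pack("<f", n))
    return converted

def uint32_to_float32(n):
    # Inverse direction: bytes of a uint32 read back as a float32.
    (unpacked,) = struct.unpack("<f", struct.pack("<L", n))
    return unpacked

assert float32_to_uint32(1.0) == 0x3F800000  # IEEE-754 bits of 1.0
assert uint32_to_float32(0x3F800000) == 1.0
```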
@@ -0,0 +1,41 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Provide pre-compiled struct packers for encoding and decoding.

See: https://docs.python.org/2/library/struct.html#format-characters
"""

import struct
from . import compat


boolean = struct.Struct(compat.struct_bool_decl)

uint8 = struct.Struct("<B")
uint16 = struct.Struct("<H")
uint32 = struct.Struct("<I")
uint64 = struct.Struct("<Q")

int8 = struct.Struct("<b")
int16 = struct.Struct("<h")
int32 = struct.Struct("<i")
int64 = struct.Struct("<q")

float32 = struct.Struct("<f")
float64 = struct.Struct("<d")

uoffset = uint32
soffset = int32
voffset = uint16
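All of the formats above use the `<` prefix, which fixes little-endian byte order and standard sizes regardless of the host platform. A quick check of what that buys:

```python
import struct

uint32 = struct.Struct("<I")

# "<I" always emits the least-significant byte first and is always 4 bytes,
# so serialized buffers are portable across architectures.
assert uint32.size == 4
assert uint32.pack(1) == b"\x01\x00\x00\x00"
assert uint32.unpack(b"\x01\x00\x00\x00")[0] == 1
```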
@@ -0,0 +1,148 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import encode
from . import number_types as N


class Table(object):
    """Table wraps a byte slice and provides read access to its data.

    The variable `Pos` indicates the root of the FlatBuffers object therein.
    """

    __slots__ = ("Bytes", "Pos")

    def __init__(self, buf, pos):
        N.enforce_number(pos, N.UOffsetTFlags)

        self.Bytes = buf
        self.Pos = pos

    def Offset(self, vtableOffset):
        """Offset provides access into the Table's vtable.

        Deprecated fields are ignored by checking the vtable's length.
        """

        vtable = self.Pos - self.Get(N.SOffsetTFlags, self.Pos)
        vtableEnd = self.Get(N.VOffsetTFlags, vtable)
        if vtableOffset < vtableEnd:
            return self.Get(N.VOffsetTFlags, vtable + vtableOffset)
        return 0

    def Indirect(self, off):
        """Indirect retrieves the relative offset stored at `off`."""
        N.enforce_number(off, N.UOffsetTFlags)
        return off + encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)

    def String(self, off):
        """String gets a string from data stored inside the flatbuffer."""
        N.enforce_number(off, N.UOffsetTFlags)
        off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        start = off + N.UOffsetTFlags.bytewidth
        length = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        return bytes(self.Bytes[start : start + length])

    def VectorLen(self, off):
        """VectorLen retrieves the length of the vector whose offset is
        stored at "off" in this object.
        """
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        ret = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        return ret

    def Vector(self, off):
        """Vector retrieves the start of data of the vector whose offset is
        stored at "off" in this object.
        """
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        x = off + self.Get(N.UOffsetTFlags, off)
        # data starts after metadata containing the vector length
        x += N.UOffsetTFlags.bytewidth
        return x

    def Union(self, t2, off):
        """Union initializes any Table-derived type to point to the union at
        the given offset.
        """
        assert type(t2) is Table
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        t2.Pos = off + self.Get(N.UOffsetTFlags, off)
        t2.Bytes = self.Bytes

    def Get(self, flags, off):
        """Get retrieves a value of the type specified by `flags` at the
        given offset.
        """
        N.enforce_number(off, N.UOffsetTFlags)
        return flags.py_type(encode.Get(flags.packer_type, self.Bytes, off))

    def GetSlot(self, slot, d, validator_flags):
        N.enforce_number(slot, N.VOffsetTFlags)
        if validator_flags is not None:
            N.enforce_number(d, validator_flags)
        off = self.Offset(slot)
        if off == 0:
            return d
        return self.Get(validator_flags, self.Pos + off)

    def GetVectorAsNumpy(self, flags, off):
        """GetVectorAsNumpy returns the vector that starts at `Vector(off)`
        as a numpy array with the type specified by `flags`. The array is
        a `view` into Bytes, so modifying the returned array will
        modify Bytes in place.
        """
        offset = self.Vector(off)
        length = self.VectorLen(off)  # TODO: length accounts for bytewidth, right?
        numpy_dtype = N.to_numpy_type(flags)
        return encode.GetVectorAsNumpy(numpy_dtype, self.Bytes, length, offset)

    def GetArrayAsNumpy(self, flags, off, length):
        """GetArrayAsNumpy returns the fixed-width array that starts at
        `Vector(off)` with length `length` as a numpy array with the type
        specified by `flags`. The array is a `view` into Bytes, so modifying
        the returned array will modify Bytes in place.
        """
        numpy_dtype = N.to_numpy_type(flags)
        return encode.GetVectorAsNumpy(numpy_dtype, self.Bytes, length, off)

    def GetVOffsetTSlot(self, slot, d):
        """GetVOffsetTSlot retrieves the VOffsetT that the given vtable location
        points to. If the vtable value is zero, the default value `d`
        will be returned.
        """

        N.enforce_number(slot, N.VOffsetTFlags)
        N.enforce_number(d, N.VOffsetTFlags)

        off = self.Offset(slot)
        if off == 0:
            return d
        return off
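The `Offset` lookup above walks a vtable: the table begins with a signed offset back to its vtable, whose first two uint16s give the vtable size and table size, followed by one uint16 slot per field. A stdlib-only sketch on a hand-built buffer (the byte layout follows the FlatBuffers wire format; the helper names here are illustrative, not this library's API):

```python
import struct

# vtable: [vtable_size=6, table_size=8, field0_at=table+4] as uint16s,
# then the table: an int32 soffset back to the vtable, then one int32 field.
vtable = struct.pack("<HHH", 6, 8, 4)
table_pos = len(vtable)                      # table starts at byte 6
buf = vtable + struct.pack("<i", table_pos) + struct.pack("<i", 42)

def offset(buf, pos, vtable_offset):
    # pos - soffset gives the vtable position; slots past the vtable's
    # recorded length are treated as absent (deprecated fields).
    vt = pos - struct.unpack_from("<i", buf, pos)[0]
    vt_end = struct.unpack_from("<H", buf, vt)[0]
    if vtable_offset < vt_end:
        return struct.unpack_from("<H", buf, vt + vtable_offset)[0]
    return 0

field_off = offset(buf, table_pos, 4)
assert field_off == 4
assert struct.unpack_from("<i", buf, table_pos + field_off)[0] == 42
```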
@@ -0,0 +1,47 @@
# Copyright 2017 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import encode
from . import number_types
from . import packer


def GetSizePrefix(buf, offset):
    """Extract the size prefix from a buffer."""
    return encode.Get(packer.int32, buf, offset)


def GetBufferIdentifier(buf, offset, size_prefixed=False):
    """Extract the file_identifier from a buffer."""
    if size_prefixed:
        # increase offset by size of UOffsetTFlags
        offset += number_types.UOffsetTFlags.bytewidth
    # increase offset by size of root table pointer
    offset += number_types.UOffsetTFlags.bytewidth
    # end of FILE_IDENTIFIER
    end = offset + encode.FILE_IDENTIFIER_LENGTH
    return buf[offset:end]


def BufferHasIdentifier(buf, offset, file_identifier, size_prefixed=False):
    got = GetBufferIdentifier(buf, offset, size_prefixed=size_prefixed)
    return got == file_identifier


def RemoveSizePrefix(buf, offset):
    """Create a slice of a size-prefixed buffer that has
    its position advanced just past the size prefix.
    """
    return buf, offset + number_types.Int32Flags.bytewidth
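The size-prefix helpers above rely on a simple layout: a size-prefixed FlatBuffer starts with a little-endian int32 holding the length of the rest of the buffer. A stdlib-only sketch (hypothetical helper names, not this module's API):

```python
import struct

def get_size_prefix(buf, offset):
    # Read the 4-byte little-endian size stored at the front.
    return struct.unpack_from("<i", buf, offset)[0]

def remove_size_prefix(buf, offset):
    # Advance past the 4-byte prefix; the buffer itself is unchanged.
    return buf, offset + 4

payload = b"MONS" + b"\x00" * 12          # stand-in for a real buffer body
buf = struct.pack("<i", len(payload)) + payload
assert get_size_prefix(buf, 0) == 16
_, new_offset = remove_size_prefix(buf, 0)
assert new_offset == 4
```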
Binary file not shown.
@@ -0,0 +1,10 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# Copyright 2007 Google Inc. All Rights Reserved.

__version__ = '6.33.5'
@@ -0,0 +1,53 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Any helper APIs."""

from typing import Optional, TypeVar

from google.protobuf import descriptor
from google.protobuf.message import Message

from google.protobuf.any_pb2 import Any


_MessageT = TypeVar('_MessageT', bound=Message)


def pack(
    msg: Message,
    type_url_prefix: Optional[str] = 'type.googleapis.com/',
    deterministic: Optional[bool] = None,
) -> Any:
  any_msg = Any()
  any_msg.Pack(
      msg=msg, type_url_prefix=type_url_prefix, deterministic=deterministic
  )
  return any_msg


def unpack(any_msg: Any, msg: Message) -> bool:
  return any_msg.Unpack(msg=msg)


def unpack_as(any_msg: Any, message_type: type[_MessageT]) -> _MessageT:
  unpacked = message_type()
  if unpack(any_msg, unpacked):
    return unpacked
  else:
    raise TypeError(
        f'Attempted to unpack {type_name(any_msg)} to'
        f' {message_type.__qualname__}'
    )


def type_name(any_msg: Any) -> str:
  return any_msg.TypeName()


def is_type(any_msg: Any, des: descriptor.Descriptor) -> bool:
  return any_msg.Is(des)
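`pack` above stores the payload's full message name under a type URL such as `type.googleapis.com/google.protobuf.Duration`, and `type_name` recovers the part after the last `/`. A stdlib-only sketch of that mapping (hypothetical helper, not the protobuf API, which resolves the name from the serialized message):

```python
def type_name_from_url(type_url: str) -> str:
    # The full message name is everything after the last '/'.
    return type_url.rpartition('/')[2]

url = 'type.googleapis.com/google.protobuf.Duration'
assert type_name_from_url(url) == 'google.protobuf.Duration'
# A bare name without a prefix is returned unchanged.
assert type_name_from_url('google.protobuf.Any') == 'google.protobuf.Any'
```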
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/any.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/any.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"6\n\x03\x41ny\x12\x19\n\x08type_url\x18\x01 \x01(\tR\x07typeUrl\x12\x14\n\x05value\x18\x02 \x01(\x0cR\x05valueBv\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_ANY']._serialized_start=46
  _globals['_ANY']._serialized_end=100
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/api.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/api.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2
from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\xdb\x02\n\x03\x41pi\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x31\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.MethodR\x07methods\x12\x31\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x18\n\x07version\x18\x04 \x01(\tR\x07version\x12\x45\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12.\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.MixinR\x06mixins\x12/\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x08 \x01(\tR\x07\x65\x64ition\"\xd4\x02\n\x06Method\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12(\n\x10request_type_url\x18\x02 \x01(\tR\x0erequestTypeUrl\x12+\n\x11request_streaming\x18\x03 \x01(\x08R\x10requestStreaming\x12*\n\x11response_type_url\x18\x04 \x01(\tR\x0fresponseTypeUrl\x12-\n\x12response_streaming\x18\x05 \x01(\x08R\x11responseStreaming\x12\x31\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x33\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.SyntaxB\x02\x18\x01R\x06syntax\x12\x1c\n\x07\x65\x64ition\x18\x08 \x01(\tB\x02\x18\x01R\x07\x65\x64ition\"/\n\x05Mixin\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x12\n\x04root\x18\x02 \x01(\tR\x04rootBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_METHOD'].fields_by_name['syntax']._loaded_options = None
  _globals['_METHOD'].fields_by_name['syntax']._serialized_options = b'\030\001'
  _globals['_METHOD'].fields_by_name['edition']._loaded_options = None
  _globals['_METHOD'].fields_by_name['edition']._serialized_options = b'\030\001'
  _globals['_API']._serialized_start=113
  _globals['_API']._serialized_end=460
  _globals['_METHOD']._serialized_start=463
  _globals['_METHOD']._serialized_end=803
  _globals['_MIXIN']._serialized_start=805
  _globals['_MIXIN']._serialized_end=852
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/compiler/plugin.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/compiler/plugin.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"c\n\x07Version\x12\x14\n\x05major\x18\x01 \x01(\x05R\x05major\x12\x14\n\x05minor\x18\x02 \x01(\x05R\x05minor\x12\x14\n\x05patch\x18\x03 \x01(\x05R\x05patch\x12\x16\n\x06suffix\x18\x04 \x01(\tR\x06suffix\"\xcf\x02\n\x14\x43odeGeneratorRequest\x12(\n\x10\x66ile_to_generate\x18\x01 \x03(\tR\x0e\x66ileToGenerate\x12\x1c\n\tparameter\x18\x02 \x01(\tR\tparameter\x12\x43\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProtoR\tprotoFile\x12\\\n\x17source_file_descriptors\x18\x11 \x03(\x0b\x32$.google.protobuf.FileDescriptorProtoR\x15sourceFileDescriptors\x12L\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.VersionR\x0f\x63ompilerVersion\"\x85\x04\n\x15\x43odeGeneratorResponse\x12\x14\n\x05\x65rror\x18\x01 \x01(\tR\x05\x65rror\x12-\n\x12supported_features\x18\x02 \x01(\x04R\x11supportedFeatures\x12\'\n\x0fminimum_edition\x18\x03 \x01(\x05R\x0eminimumEdition\x12\'\n\x0fmaximum_edition\x18\x04 \x01(\x05R\x0emaximumEdition\x12H\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.FileR\x04\x66ile\x1a\xb1\x01\n\x04\x46ile\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\'\n\x0finsertion_point\x18\x02 \x01(\tR\x0einsertionPoint\x12\x18\n\x07\x63ontent\x18\x0f \x01(\tR\x07\x63ontent\x12R\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfoR\x11generatedCodeInfo\"W\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x12\x1d\n\x19\x46\x45\x41TURE_SUPPORTS_EDITIONS\x10\x02\x42r\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb\xaa\x02\x18Google.Protobuf.Compiler')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb\252\002\030Google.Protobuf.Compiler'
  _globals['_VERSION']._serialized_start=101
  _globals['_VERSION']._serialized_end=200
  _globals['_CODEGENERATORREQUEST']._serialized_start=203
  _globals['_CODEGENERATORREQUEST']._serialized_end=538
  _globals['_CODEGENERATORRESPONSE']._serialized_start=541
  _globals['_CODEGENERATORRESPONSE']._serialized_end=1058
  _globals['_CODEGENERATORRESPONSE_FILE']._serialized_start=792
  _globals['_CODEGENERATORRESPONSE_FILE']._serialized_end=969
  _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_start=971
  _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_end=1058
# @@protoc_insertion_point(module_scope)
File diff suppressed because it is too large
@@ -0,0 +1,172 @@
|
||||
# Protocol Buffers - Google's data interchange format
|
||||
# Copyright 2008 Google Inc. All rights reserved.
|
||||
#
|
||||
# Use of this source code is governed by a BSD-style
|
||||
# license that can be found in the LICENSE file or at
|
||||
# https://developers.google.com/open-source/licenses/bsd
|
||||
|
||||
"""Provides a container for DescriptorProtos."""
|
||||
|
||||
__author__ = 'matthewtoia@google.com (Matt Toia)'
|
||||
|
||||
import warnings
|
||||
|
||||
|
||||
class Error(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class DescriptorDatabaseConflictingDefinitionError(Error):
|
||||
"""Raised when a proto is added with the same name & different descriptor."""
|
||||
|
||||
|
||||
class DescriptorDatabase(object):
|
||||
"""A container accepting FileDescriptorProtos and maps DescriptorProtos."""
|
||||
|
||||
def __init__(self):
|
||||
self._file_desc_protos_by_file = {}
|
||||
self._file_desc_protos_by_symbol = {}
|
||||
|
||||
def Add(self, file_desc_proto):
|
||||
"""Adds the FileDescriptorProto and its types to this database.
|
||||
|
||||
Args:
|
||||
file_desc_proto: The FileDescriptorProto to add.
|
||||
Raises:
|
||||
DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
|
||||
add a proto with the same name but different definition than an
|
||||
existing proto in the database.
|
||||
"""
|
||||
proto_name = file_desc_proto.name
|
||||
if proto_name not in self._file_desc_protos_by_file:
|
||||
self._file_desc_protos_by_file[proto_name] = file_desc_proto
|
||||
elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
|
||||
raise DescriptorDatabaseConflictingDefinitionError(
|
||||
'%s already added, but with different descriptor.' % proto_name)
|
||||
else:
|
||||
return
|
||||
|
||||
# Add all the top-level descriptors to the index.
|
||||
package = file_desc_proto.package
|
||||
for message in file_desc_proto.message_type:
|
||||
for name in _ExtractSymbols(message, package):
|
||||
self._AddSymbol(name, file_desc_proto)
|
||||
for enum in file_desc_proto.enum_type:
|
||||
self._AddSymbol(
|
||||
('.'.join((package, enum.name)) if package else enum.name),
|
||||
file_desc_proto,
|
||||
)
|
||||
for enum_value in enum.value:
|
||||
self._file_desc_protos_by_symbol[
|
||||
'.'.join((package, enum_value.name)) if package else enum_value.name
|
||||
] = file_desc_proto
|
||||
for extension in file_desc_proto.extension:
|
||||
self._AddSymbol(
|
||||
('.'.join((package, extension.name)) if package else extension.name),
|
||||
file_desc_proto,
|
||||
)
|
||||
for service in file_desc_proto.service:
|
||||
self._AddSymbol(
|
||||
('.'.join((package, service.name)) if package else service.name),
|
||||
file_desc_proto,
|
||||
)
|
||||
|
||||
def FindFileByName(self, name):
|
||||
"""Finds the file descriptor proto by file name.
|
||||
|
||||
Typically the file name is a relative path ending to a .proto file. The
|
||||
proto with the given name will have to have been added to this database
|
||||
using the Add method or else an error will be raised.
|
||||
|
||||
Args:
|
||||
name: The file name to find.
|
||||
|
||||
Returns:
|
||||
The file descriptor proto matching the name.
|
||||
|
||||
Raises:
|
||||
KeyError if no file by the given name was added.
|
||||
"""
|
||||
|
||||
return self._file_desc_protos_by_file[name]
|
||||
|
||||
def FindFileContainingSymbol(self, symbol):
|
||||
"""Finds the file descriptor proto containing the specified symbol.
|
||||
|
||||
The symbol should be a fully qualified name including the file descriptor's
|
||||
package and any containing messages. Some examples:
|
||||
|
||||
'some.package.name.Message'
|
||||
'some.package.name.Message.NestedEnum'
|
||||
'some.package.name.Message.some_field'
|
||||
|
||||
The file descriptor proto containing the specified symbol must be added to
|
||||
this database using the Add method or else an error will be raised.
|
||||
|
||||
Args:
|
||||
symbol: The fully qualified symbol name.
|
||||
|
||||
Returns:
|
||||
The file descriptor proto containing the symbol.
|
||||
|
||||
Raises:
|
||||
KeyError if no file contains the specified symbol.
|
||||
"""
|
||||
if symbol.count('.') == 1 and symbol[0] == '.':
|
||||
symbol = symbol.lstrip('.')
|
||||
warnings.warn(
|
||||
'Please remove the leading "." when '
|
||||
'FindFileContainingSymbol, this will turn to error '
|
||||
'in 2026 Jan.',
|
||||
RuntimeWarning,
|
||||
)
|
||||
try:
|
||||
return self._file_desc_protos_by_symbol[symbol]
|
||||
except KeyError:
|
||||
# Fields, enum values, and nested extensions are not in
|
||||
# _file_desc_protos_by_symbol. Try to find the top level
|
||||
# descriptor. Non-existent nested symbol under a valid top level
|
||||
# descriptor can also be found. The behavior is the same with
|
||||
# protobuf C++.
|
||||
top_level, _, _ = symbol.rpartition('.')
|
||||
try:
|
||||
return self._file_desc_protos_by_symbol[top_level]
|
||||
except KeyError:
|
||||
# Raise the original symbol as a KeyError for better diagnostics.
|
||||
raise KeyError(symbol)
|
||||
|
||||
def FindFileContainingExtension(self, extendee_name, extension_number):
|
||||
# TODO: implement this API.
|
||||
return None
|
||||
|
||||
def FindAllExtensionNumbers(self, extendee_name):
|
||||
# TODO: implement this API.
|
||||
return []
|
||||
|
||||
def _AddSymbol(self, name, file_desc_proto):
|
||||
if name in self._file_desc_protos_by_symbol:
|
||||
warn_msg = ('Conflict register for file "' + file_desc_proto.name +
|
||||
'": ' + name +
|
||||
' is already defined in file "' +
|
||||
self._file_desc_protos_by_symbol[name].name + '"')
|
||||
warnings.warn(warn_msg, RuntimeWarning)
|
||||
self._file_desc_protos_by_symbol[name] = file_desc_proto
|
||||
|
||||
|
||||
def _ExtractSymbols(desc_proto, package):
|
||||
"""Pulls out all the symbols from a descriptor proto.
|
||||
|
||||
Args:
|
||||
desc_proto: The proto to extract symbols from.
|
||||
package: The package containing the descriptor type.
|
||||
|
||||
Yields:
|
||||
The fully qualified name found in the descriptor.
|
||||
"""
|
||||
message_name = package + '.' + desc_proto.name if package else desc_proto.name
|
||||
yield message_name
|
||||
for nested_type in desc_proto.nested_type:
|
||||
for symbol in _ExtractSymbols(nested_type, message_name):
|
||||
yield symbol
|
||||
for enum_type in desc_proto.enum_type:
|
||||
yield '.'.join((message_name, enum_type.name))
|
||||
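The fallback lookup in FindFileContainingSymbol (retry with the last dotted component stripped) can be sketched without protobuf at all. This is a minimal stdlib-only illustration; the registry contents and file names are hypothetical, not real protobuf state.

```python
# Hypothetical mini-registry mirroring FindFileContainingSymbol's fallback:
# field and enum-value symbols are not registered directly, so the lookup
# retries with the last dotted component stripped via str.rpartition.
registry = {
    'some.package.name.Message': 'example.proto',            # hypothetical
    'some.package.name.Message.NestedEnum': 'example.proto',  # hypothetical
}

def find_file(symbol: str) -> str:
    try:
        return registry[symbol]
    except KeyError:
        top_level, _, _ = symbol.rpartition('.')
        try:
            return registry[top_level]
        except KeyError:
            # Re-raise with the original symbol for better diagnostics.
            raise KeyError(symbol)

print(find_file('some.package.name.Message.some_field'))  # → example.proto
```

A field symbol resolves to the file of its enclosing message, exactly the behavior the comment above attributes to protobuf C++.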
@@ -0,0 +1,100 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Duration helper APIs."""

import datetime

from google.protobuf.duration_pb2 import Duration


def from_json_string(value: str) -> Duration:
  """Converts a string to Duration.

  Args:
    value: A string to be converted. The string must end with 's'. Any
      fractional digits (or none) are accepted as long as they fit into
      precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s"

  Raises:
    ValueError: On parsing problems.
  """
  duration = Duration()
  duration.FromJsonString(value)
  return duration


def from_microseconds(micros: float) -> Duration:
  """Converts microseconds to Duration."""
  duration = Duration()
  duration.FromMicroseconds(micros)
  return duration


def from_milliseconds(millis: float) -> Duration:
  """Converts milliseconds to Duration."""
  duration = Duration()
  duration.FromMilliseconds(millis)
  return duration


def from_nanoseconds(nanos: float) -> Duration:
  """Converts nanoseconds to Duration."""
  duration = Duration()
  duration.FromNanoseconds(nanos)
  return duration


def from_seconds(seconds: float) -> Duration:
  """Converts seconds to Duration."""
  duration = Duration()
  duration.FromSeconds(seconds)
  return duration


def from_timedelta(td: datetime.timedelta) -> Duration:
  """Converts timedelta to Duration."""
  duration = Duration()
  duration.FromTimedelta(td)
  return duration


def to_json_string(duration: Duration) -> str:
  """Converts Duration to string format.

  Returns:
    A string converted from self. The string format will contain
    3, 6, or 9 fractional digits depending on the precision required to
    represent the exact Duration value. For example: "1s", "1.010s",
    "1.000000100s", "-3.100s"
  """
  return duration.ToJsonString()


def to_microseconds(duration: Duration) -> int:
  """Converts a Duration to microseconds."""
  return duration.ToMicroseconds()


def to_milliseconds(duration: Duration) -> int:
  """Converts a Duration to milliseconds."""
  return duration.ToMilliseconds()


def to_nanoseconds(duration: Duration) -> int:
  """Converts a Duration to nanoseconds."""
  return duration.ToNanoseconds()


def to_seconds(duration: Duration) -> int:
  """Converts a Duration to seconds."""
  return duration.ToSeconds()


def to_timedelta(duration: Duration) -> datetime.timedelta:
  """Converts Duration to timedelta."""
  return duration.ToTimedelta()
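These helpers are thin wrappers over Duration's From*/To* methods. The underlying mapping between a timedelta and Duration's (seconds, nanos) pair can be sketched with only the standard library; this is an illustration for non-negative durations under the assumption that Duration stores whole seconds plus sub-second nanos, not the library's own conversion code.

```python
import datetime

# Protobuf-free sketch of what from_timedelta/to_timedelta compute:
# a Duration is (seconds, nanos); timedelta resolution is microseconds,
# so nanos produced this way are always a multiple of 1000.
def timedelta_to_pair(td: datetime.timedelta) -> tuple:
    total_us = (td.days * 86400 + td.seconds) * 10**6 + td.microseconds
    seconds, rem_us = divmod(total_us, 10**6)
    return seconds, rem_us * 1000

def pair_to_timedelta(seconds: int, nanos: int) -> datetime.timedelta:
    return datetime.timedelta(seconds=seconds, microseconds=nanos // 1000)

td = datetime.timedelta(seconds=1, milliseconds=10)
print(timedelta_to_pair(td))  # → (1, 10000000)
assert pair_to_timedelta(*timedelta_to_pair(td)) == td
```

Note that real protobuf Durations keep seconds and nanos with the same sign for negative values, which this simplified divmod-based sketch does not reproduce.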
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/duration.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/duration.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\":\n\x08\x44uration\x12\x18\n\x07seconds\x18\x01 \x01(\x03R\x07seconds\x12\x14\n\x05nanos\x18\x02 \x01(\x05R\x05nanosB\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_DURATION']._serialized_start=51
  _globals['_DURATION']._serialized_end=109
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/empty.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/empty.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_EMPTY']._serialized_start=48
  _globals['_EMPTY']._serialized_end=55
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/field_mask.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/field_mask.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"!\n\tFieldMask\x12\x14\n\x05paths\x18\x01 \x03(\tR\x05pathsB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_FIELDMASK']._serialized_start=53
  _globals['_FIELDMASK']._serialized_end=86
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,7 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

@@ -0,0 +1,136 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Determine which implementation of the protobuf API is used in this process.
"""

import importlib
import os
import sys
import warnings

_GOOGLE3_PYTHON_UPB_DEFAULT = True


def _ApiVersionToImplementationType(api_version):
  if api_version == 2:
    return 'cpp'
  if api_version == 1:
    raise ValueError('api_version=1 is no longer supported.')
  if api_version == 0:
    return 'python'
  return None


_implementation_type = None
try:
  # pylint: disable=g-import-not-at-top
  from google.protobuf.internal import _api_implementation
  # The compile-time constants in the _api_implementation module can be used to
  # switch to a certain implementation of the Python API at build time.
  _implementation_type = _ApiVersionToImplementationType(
      _api_implementation.api_version)
except ImportError:
  pass  # Unspecified by compiler flags.


def _CanImport(mod_name):
  try:
    mod = importlib.import_module(mod_name)
    # Work around a known issue in the classic bootstrap .par import hook.
    if not mod:
      raise ImportError(mod_name + ' import succeeded but was None')
    return True
  except ImportError:
    return False


if _implementation_type is None:
  if _CanImport('google._upb._message'):
    _implementation_type = 'upb'
  elif _CanImport('google.protobuf.pyext._message'):
    _implementation_type = 'cpp'
  else:
    _implementation_type = 'python'


# This environment variable can be used to switch to a certain implementation
# of the Python API, overriding the compile-time constants in the
# _api_implementation module. Right now only 'python', 'cpp' and 'upb' are
# valid values. Any other value will raise an error.
_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
                                 _implementation_type)

if _implementation_type not in ('python', 'cpp', 'upb'):
  raise ValueError('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION {0} is not '
                   'supported. Please set to \'python\', \'cpp\' or '
                   '\'upb\'.'.format(_implementation_type))

if 'PyPy' in sys.version and _implementation_type == 'cpp':
  warnings.warn('PyPy does not work yet with cpp protocol buffers. '
                'Falling back to the python implementation.')
  _implementation_type = 'python'

_c_module = None

if _implementation_type == 'cpp':
  try:
    # pylint: disable=g-import-not-at-top
    from google.protobuf.pyext import _message
    sys.modules['google3.net.proto2.python.internal.cpp._message'] = _message
    _c_module = _message
    del _message
  except ImportError:
    # TODO: fall back to python
    warnings.warn(
        'Selected implementation cpp is not available.')
    pass

if _implementation_type == 'upb':
  try:
    # pylint: disable=g-import-not-at-top
    from google._upb import _message
    _c_module = _message
    del _message
  except ImportError:
    warnings.warn('Selected implementation upb is not available. '
                  'Falling back to the python implementation.')
    _implementation_type = 'python'
    pass

# Detect if serialization should be deterministic by default
try:
  # The presence of this module in a build allows the proto implementation to
  # be upgraded merely via build deps.
  #
  # NOTE: Merely importing this automatically enables deterministic proto
  # serialization for C++ code, but we still need to export it as a boolean so
  # that we can do the same for `_implementation_type == 'python'`.
  #
  # NOTE2: It is possible for C++ code to enable deterministic serialization by
  # default _without_ affecting Python code, if the C++ implementation is not in
  # use by this module. That is intended behavior, so we don't actually expose
  # this boolean outside of this module.
  #
  # pylint: disable=g-import-not-at-top,unused-import
  from google.protobuf import enable_deterministic_proto_serialization
  _python_deterministic_proto_serialization = True
except ImportError:
  _python_deterministic_proto_serialization = False


# Usage of this function is discouraged. Clients shouldn't care which
# implementation of the API is in use. Note that there is no guarantee
# that differences between APIs will be maintained.
# Please don't use this function if possible.
def Type():
  return _implementation_type


# For internal use only
def IsPythonDefaultSerializationDeterministic():
  return _python_deterministic_proto_serialization
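The selection logic above is "probe extension modules in preference order, then let an environment variable override the result". A stdlib-only sketch of that pattern follows; the probed module names here are stand-ins (`array` for `google._upb._message`, a deliberately missing name for the cpp extension), and the env dict is passed in rather than read from `os.environ` to keep the sketch testable.

```python
import importlib

def _can_import(mod_name: str) -> bool:
    # Mirrors _CanImport: an import that raises ImportError means "absent".
    try:
        return importlib.import_module(mod_name) is not None
    except ImportError:
        return False

def choose_implementation(env: dict) -> str:
    # Probe candidates in preference order (upb, then cpp, then pure python).
    if _can_import('array'):                     # stand-in for google._upb._message
        chosen = 'upb'
    elif _can_import('no_such_module_xyz'):      # stand-in for the cpp extension
        chosen = 'cpp'
    else:
        chosen = 'python'
    # Environment variable overrides the probed result, then is validated.
    chosen = env.get('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION', chosen)
    if chosen not in ('python', 'cpp', 'upb'):
        raise ValueError('unsupported implementation: ' + chosen)
    return chosen

print(choose_implementation({}))  # → upb (the 'array' probe succeeds)
print(choose_implementation({'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'python'}))  # → python
```

The real module validates after the override for the same reason: a typo in the environment variable should fail loudly rather than silently fall back.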
@@ -0,0 +1,118 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Builds descriptors, message classes and services for generated _pb2.py.

This file is only called in python generated _pb2.py files. It builds
descriptors, message classes and services that users can directly use
in generated code.
"""

__author__ = 'jieluo@google.com (Jie Luo)'

from google.protobuf.internal import enum_type_wrapper
from google.protobuf.internal import python_message
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database

_sym_db = _symbol_database.Default()


def BuildMessageAndEnumDescriptors(file_des, module):
  """Builds message and enum descriptors.

  Args:
    file_des: FileDescriptor of the .proto file
    module: Generated _pb2 module
  """

  def BuildNestedDescriptors(msg_des, prefix):
    for (name, nested_msg) in msg_des.nested_types_by_name.items():
      module_name = prefix + name.upper()
      module[module_name] = nested_msg
      BuildNestedDescriptors(nested_msg, module_name + '_')
    for enum_des in msg_des.enum_types:
      module[prefix + enum_des.name.upper()] = enum_des

  for (name, msg_des) in file_des.message_types_by_name.items():
    module_name = '_' + name.upper()
    module[module_name] = msg_des
    BuildNestedDescriptors(msg_des, module_name + '_')


def BuildTopDescriptorsAndMessages(file_des, module_name, module):
  """Builds top level descriptors and message classes.

  Args:
    file_des: FileDescriptor of the .proto file
    module_name: str, the name of generated _pb2 module
    module: Generated _pb2 module
  """

  def BuildMessage(msg_des, prefix):
    create_dict = {}
    for (name, nested_msg) in msg_des.nested_types_by_name.items():
      create_dict[name] = BuildMessage(nested_msg, prefix + msg_des.name + '.')
    create_dict['DESCRIPTOR'] = msg_des
    create_dict['__module__'] = module_name
    create_dict['__qualname__'] = prefix + msg_des.name
    message_class = _reflection.GeneratedProtocolMessageType(
        msg_des.name, (_message.Message,), create_dict)
    _sym_db.RegisterMessage(message_class)
    return message_class

  # top level enums
  for (name, enum_des) in file_des.enum_types_by_name.items():
    module['_' + name.upper()] = enum_des
    module[name] = enum_type_wrapper.EnumTypeWrapper(enum_des)
    for enum_value in enum_des.values:
      module[enum_value.name] = enum_value.number

  # top level extensions
  for (name, extension_des) in file_des.extensions_by_name.items():
    module[name.upper() + '_FIELD_NUMBER'] = extension_des.number
    module[name] = extension_des

  # services
  for (name, service) in file_des.services_by_name.items():
    module['_' + name.upper()] = service

  # Build messages.
  for (name, msg_des) in file_des.message_types_by_name.items():
    module[name] = BuildMessage(msg_des, '')


def AddHelpersToExtensions(file_des):
  """No-op kept so old generated code works with the new runtime.

  Args:
    file_des: FileDescriptor of the .proto file
  """
  # TODO: Remove this no-op
  return


def BuildServices(file_des, module_name, module):
  """Builds services classes and services stub class.

  Args:
    file_des: FileDescriptor of the .proto file
    module_name: str, the name of generated _pb2 module
    module: Generated _pb2 module
  """
  # pylint: disable=g-import-not-at-top
  from google.protobuf import service_reflection
  # pylint: enable=g-import-not-at-top
  for (name, service) in file_des.services_by_name.items():
    module[name] = service_reflection.GeneratedServiceType(
        name, (),
        dict(DESCRIPTOR=service, __module__=module_name))
    stub_name = name + '_Stub'
    module[stub_name] = service_reflection.GeneratedServiceStubType(
        stub_name, (module[name],),
        dict(DESCRIPTOR=service, __module__=module_name))
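BuildMessage assembles each message class dynamically from a dict of attributes passed to a metaclass call. The mechanism can be sketched with plain `type` standing in for `_reflection.GeneratedProtocolMessageType`; `FakeDescriptor` is a placeholder, not a real protobuf Descriptor.

```python
# Sketch of the dynamic class creation BuildMessage performs: a class dict
# carrying DESCRIPTOR, __module__ and __qualname__ is handed to a metaclass
# call. Plain `type` stands in for GeneratedProtocolMessageType here.
class FakeDescriptor:
    def __init__(self, name):
        self.name = name

def build_message(descriptor, module_name, prefix=''):
    create_dict = {
        'DESCRIPTOR': descriptor,
        '__module__': module_name,
        '__qualname__': prefix + descriptor.name,
    }
    return type(descriptor.name, (object,), create_dict)

Msg = build_message(FakeDescriptor('Thing'), 'my_pkg.thing_pb2')
print(Msg.__qualname__, Msg.__module__)  # → Thing my_pkg.thing_pb2
```

Setting `__module__` and `__qualname__` explicitly is what makes the generated classes picklable and introspectable as if they had been written out in the `_pb2` module by hand.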
@@ -0,0 +1,690 @@
|
||||
# Protocol Buffers - Google's data interchange format
|
||||
# Copyright 2008 Google Inc. All rights reserved.
|
||||
#
|
||||
# Use of this source code is governed by a BSD-style
|
||||
# license that can be found in the LICENSE file or at
|
||||
# https://developers.google.com/open-source/licenses/bsd
|
||||
|
||||
"""Contains container classes to represent different protocol buffer types.
|
||||
|
||||
This file defines container classes which represent categories of protocol
|
||||
buffer field types which need extra maintenance. Currently these categories
|
||||
are:
|
||||
|
||||
- Repeated scalar fields - These are all repeated fields which aren't
|
||||
composite (e.g. they are of simple types like int32, string, etc).
|
||||
- Repeated composite fields - Repeated fields which are composite. This
|
||||
includes groups and nested messages.
|
||||
"""
|
||||
|
||||
import collections.abc
|
||||
import copy
|
||||
import pickle
|
||||
from typing import (
|
||||
Any,
|
||||
Iterable,
|
||||
Iterator,
|
||||
List,
|
||||
MutableMapping,
|
||||
MutableSequence,
|
||||
NoReturn,
|
||||
Optional,
|
||||
Sequence,
|
||||
TypeVar,
|
||||
Union,
|
||||
overload,
|
||||
)
|
||||
|
||||
|
||||
_T = TypeVar('_T')
|
||||
_K = TypeVar('_K')
|
||||
_V = TypeVar('_V')
|
||||
|
||||
|
||||
class BaseContainer(Sequence[_T]):
|
||||
"""Base container class."""
|
||||
|
||||
# Minimizes memory usage and disallows assignment to other attributes.
|
||||
__slots__ = ['_message_listener', '_values']
|
||||
|
||||
def __init__(self, message_listener: Any) -> None:
|
||||
"""
|
||||
Args:
|
||||
message_listener: A MessageListener implementation.
|
||||
The RepeatedScalarFieldContainer will call this object's
|
||||
Modified() method when it is modified.
|
||||
"""
|
||||
self._message_listener = message_listener
|
||||
self._values = []
|
||||
|
||||
@overload
|
||||
def __getitem__(self, key: int) -> _T:
|
||||
...
|
||||
|
||||
@overload
|
||||
def __getitem__(self, key: slice) -> List[_T]:
|
||||
...
|
||||
|
||||
def __getitem__(self, key):
|
||||
"""Retrieves item by the specified key."""
|
||||
return self._values[key]
|
||||
|
||||
def __len__(self) -> int:
|
||||
"""Returns the number of elements in the container."""
|
||||
return len(self._values)
|
||||
|
||||
def __ne__(self, other: Any) -> bool:
|
||||
"""Checks if another instance isn't equal to this one."""
|
||||
# The concrete classes should define __eq__.
|
||||
return not self == other
|
||||
|
||||
__hash__ = None
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return repr(self._values)
|
||||
|
||||
def sort(self, *args, **kwargs) -> None:
|
||||
# Continue to support the old sort_function keyword argument.
|
||||
# This is expected to be a rare occurrence, so use LBYL to avoid
|
||||
# the overhead of actually catching KeyError.
|
||||
if 'sort_function' in kwargs:
|
||||
kwargs['cmp'] = kwargs.pop('sort_function')
|
||||
self._values.sort(*args, **kwargs)
|
||||
|
||||
def reverse(self) -> None:
|
||||
self._values.reverse()
|
||||
|
||||
|
||||
# TODO: Remove this. BaseContainer does *not* conform to
|
||||
# MutableSequence, only its subclasses do.
|
||||
collections.abc.MutableSequence.register(BaseContainer)
|
||||
|
||||
|
||||
class RepeatedScalarFieldContainer(BaseContainer[_T], MutableSequence[_T]):
|
||||
"""Simple, type-checked, list-like container for holding repeated scalars."""
|
||||
|
||||
# Disallows assignment to other attributes.
|
||||
__slots__ = ['_type_checker']
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message_listener: Any,
|
||||
type_checker: Any,
|
||||
) -> None:
|
||||
"""Args:
|
||||
|
||||
message_listener: A MessageListener implementation. The
|
||||
RepeatedScalarFieldContainer will call this object's Modified() method
|
||||
when it is modified.
|
||||
type_checker: A type_checkers.ValueChecker instance to run on elements
|
||||
inserted into this container.
|
||||
"""
|
||||
super().__init__(message_listener)
|
||||
self._type_checker = type_checker
|
||||
|
||||
def append(self, value: _T) -> None:
|
||||
"""Appends an item to the list. Similar to list.append()."""
|
||||
self._values.append(self._type_checker.CheckValue(value))
|
||||
if not self._message_listener.dirty:
|
||||
self._message_listener.Modified()
|
||||
|
||||
def insert(self, key: int, value: _T) -> None:
|
||||
"""Inserts the item at the specified position. Similar to list.insert()."""
|
||||
self._values.insert(key, self._type_checker.CheckValue(value))
|
||||
if not self._message_listener.dirty:
|
||||
self._message_listener.Modified()
|
||||
|
||||
def extend(self, elem_seq: Iterable[_T]) -> None:
|
||||
"""Extends by appending the given iterable. Similar to list.extend()."""
|
||||
elem_seq_iter = iter(elem_seq)
|
||||
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
|
||||
if new_values:
|
||||
self._values.extend(new_values)
|
||||
self._message_listener.Modified()
|
||||
|
||||
def MergeFrom(
|
||||
self,
|
||||
other: Union['RepeatedScalarFieldContainer[_T]', Iterable[_T]],
|
||||
) -> None:
|
||||
"""Appends the contents of another repeated field of the same type to this
|
||||
one. We do not check the types of the individual fields.
|
||||
"""
|
||||
self._values.extend(other)
|
||||
self._message_listener.Modified()
|
||||
|
||||
def remove(self, elem: _T):
|
||||
"""Removes an item from the list. Similar to list.remove()."""
|
||||
self._values.remove(elem)
|
||||
self._message_listener.Modified()
|
||||
|
||||
def pop(self, key: Optional[int] = -1) -> _T:
|
||||
"""Removes and returns an item at a given index. Similar to list.pop()."""
|
||||
value = self._values[key]
|
||||
self.__delitem__(key)
|
||||
return value
|
||||
|
||||
@overload
|
||||
def __setitem__(self, key: int, value: _T) -> None:
|
||||
...
|
||||
|
||||
@overload
|
||||
def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
|
||||
...
|
||||
|
||||
def __setitem__(self, key, value) -> None:
|
||||
"""Sets the item on the specified position."""
|
||||
if isinstance(key, slice):
|
||||
if key.step is not None:
|
||||
raise ValueError('Extended slices not supported')
|
||||
self._values[key] = map(self._type_checker.CheckValue, value)
|
||||
self._message_listener.Modified()
|
||||
else:
|
||||
self._values[key] = self._type_checker.CheckValue(value)
|
||||
self._message_listener.Modified()
|
||||
|
||||
def __delitem__(self, key: Union[int, slice]) -> None:
|
||||
"""Deletes the item at the specified position."""
|
||||
del self._values[key]
|
||||
self._message_listener.Modified()
|
||||
|
||||
def __eq__(self, other: Any) -> bool:
|
||||
"""Compares the current instance with another one."""
|
||||
if self is other:
|
||||
return True
|
||||
# Special case for the same type which should be common and fast.
|
||||
if isinstance(other, self.__class__):
|
||||
return other._values == self._values
|
||||
# We are presumably comparing against some other sequence type.
|
||||
return other == self._values
|
||||
|
||||
def __deepcopy__(
|
||||
self,
|
||||
unused_memo: Any = None,
|
||||
) -> 'RepeatedScalarFieldContainer[_T]':
|
||||
clone = RepeatedScalarFieldContainer(
|
||||
copy.deepcopy(self._message_listener), self._type_checker)
|
||||
clone.MergeFrom(self)
|
||||
return clone
|
||||
|
||||
def __reduce__(self, **kwargs) -> NoReturn:
|
||||
raise pickle.PickleError(
|
||||
"Can't pickle repeated scalar fields, convert to list first")
|
||||
|
||||
|
||||
# TODO: Constrain T to be a subtype of Message.
|
||||
class RepeatedCompositeFieldContainer(BaseContainer[_T], MutableSequence[_T]):
|
||||
"""Simple, list-like container for holding repeated composite fields."""
|
||||
|
||||
# Disallows assignment to other attributes.
|
||||
__slots__ = ['_message_descriptor']
|
||||
|
||||
def __init__(self, message_listener: Any, message_descriptor: Any) -> None:
|
||||
"""
|
||||
Note that we pass in a descriptor instead of the generated directly,
|
||||
since at the time we construct a _RepeatedCompositeFieldContainer we
|
||||
haven't yet necessarily initialized the type that will be contained in the
|
||||
container.
|
||||
|
||||
Args:
|
||||
message_listener: A MessageListener implementation.
|
||||
The RepeatedCompositeFieldContainer will call this object's
|
||||
Modified() method when it is modified.
|
||||
message_descriptor: A Descriptor instance describing the protocol type
|
||||
that should be present in this container. We'll use the
|
||||
_concrete_class field of this descriptor when the client calls add().
|
||||
"""
|
||||
super().__init__(message_listener)
|
||||
self._message_descriptor = message_descriptor
|
||||
|
||||
def add(self, **kwargs: Any) -> _T:
|
||||
"""Adds a new element at the end of the list and returns it. Keyword
|
||||
arguments may be used to initialize the element.
|
||||
"""
|
||||
new_element = self._message_descriptor._concrete_class(**kwargs)
|
||||
new_element._SetListener(self._message_listener)
|
||||
self._values.append(new_element)
|
||||
if not self._message_listener.dirty:
|
||||
self._message_listener.Modified()
|
||||
return new_element
|
||||
|
||||
def append(self, value: _T) -> None:
|
||||
"""Appends one element by copying the message."""
|
||||
    new_element = self._message_descriptor._concrete_class()
    new_element._SetListener(self._message_listener)
    new_element.CopyFrom(value)
    self._values.append(new_element)
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def insert(self, key: int, value: _T) -> None:
    """Inserts the item at the specified position by copying."""
    new_element = self._message_descriptor._concrete_class()
    new_element._SetListener(self._message_listener)
    new_element.CopyFrom(value)
    self._values.insert(key, new_element)
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def extend(self, elem_seq: Iterable[_T]) -> None:
    """Extends by appending the given sequence of elements of the same type
    as this one, copying each individual message.
    """
    message_class = self._message_descriptor._concrete_class
    listener = self._message_listener
    values = self._values
    for message in elem_seq:
      new_element = message_class()
      new_element._SetListener(listener)
      new_element.MergeFrom(message)
      values.append(new_element)
    listener.Modified()

  def MergeFrom(
      self,
      other: Union['RepeatedCompositeFieldContainer[_T]', Iterable[_T]],
  ) -> None:
    """Appends the contents of another repeated field of the same type to this
    one, copying each individual message.
    """
    self.extend(other)

  def remove(self, elem: _T) -> None:
    """Removes an item from the list. Similar to list.remove()."""
    self._values.remove(elem)
    self._message_listener.Modified()

  def pop(self, key: Optional[int] = -1) -> _T:
    """Removes and returns an item at a given index. Similar to list.pop()."""
    value = self._values[key]
    self.__delitem__(key)
    return value

  @overload
  def __setitem__(self, key: int, value: _T) -> None:
    ...

  @overload
  def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
    ...

  def __setitem__(self, key, value):
    # This method is implemented to make RepeatedCompositeFieldContainer
    # structurally compatible with typing.MutableSequence. It is
    # otherwise unsupported and will always raise an error.
    raise TypeError(
        f'{self.__class__.__name__} object does not support item assignment')

  def __delitem__(self, key: Union[int, slice]) -> None:
    """Deletes the item at the specified position."""
    del self._values[key]
    self._message_listener.Modified()

  def __eq__(self, other: Any) -> bool:
    """Compares the current instance with another one."""
    if self is other:
      return True
    if not isinstance(other, self.__class__):
      raise TypeError('Can only compare repeated composite fields against '
                      'other repeated composite fields.')
    return self._values == other._values

class ScalarMap(MutableMapping[_K, _V]):
  """Simple, type-checked, dict-like container for holding repeated scalars."""

  # Disallows assignment to other attributes.
  __slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener',
               '_entry_descriptor']

  def __init__(
      self,
      message_listener: Any,
      key_checker: Any,
      value_checker: Any,
      entry_descriptor: Any,
  ) -> None:
    """
    Args:
      message_listener: A MessageListener implementation.
        The ScalarMap will call this object's Modified() method when it
        is modified.
      key_checker: A type_checkers.ValueChecker instance to run on keys
        inserted into this container.
      value_checker: A type_checkers.ValueChecker instance to run on values
        inserted into this container.
      entry_descriptor: The MessageDescriptor of a map entry: key and value.
    """
    self._message_listener = message_listener
    self._key_checker = key_checker
    self._value_checker = value_checker
    self._entry_descriptor = entry_descriptor
    self._values = {}

  def __getitem__(self, key: _K) -> _V:
    try:
      return self._values[key]
    except KeyError:
      key = self._key_checker.CheckValue(key)
      val = self._value_checker.DefaultValue()
      self._values[key] = val
      return val

  def __contains__(self, item: _K) -> bool:
    # We check the key's type to match the strong-typing flavor of the API.
    # Also this makes it easier to match the behavior of the C++ implementation.
    self._key_checker.CheckValue(item)
    return item in self._values

  @overload
  def get(self, key: _K) -> Optional[_V]:
    ...

  @overload
  def get(self, key: _K, default: _T) -> Union[_V, _T]:
    ...

  # We need to override this explicitly, because our defaultdict-like behavior
  # will make the default implementation (from our base class) always insert
  # the key.
  def get(self, key, default=None):
    if key in self:
      return self[key]
    else:
      return default

  def __setitem__(self, key: _K, value: _V) -> None:
    checked_key = self._key_checker.CheckValue(key)
    checked_value = self._value_checker.CheckValue(value)
    self._values[checked_key] = checked_value
    self._message_listener.Modified()

  def __delitem__(self, key: _K) -> None:
    del self._values[key]
    self._message_listener.Modified()

  def __len__(self) -> int:
    return len(self._values)

  def __iter__(self) -> Iterator[_K]:
    return iter(self._values)

  def __repr__(self) -> str:
    return repr(self._values)

  def setdefault(self, key: _K, value: Optional[_V] = None) -> _V:
    if value is None:
      raise ValueError('The value for scalar map setdefault must be set.')
    if key not in self._values:
      self.__setitem__(key, value)
    return self[key]

  def MergeFrom(self, other: 'ScalarMap[_K, _V]') -> None:
    self._values.update(other._values)
    self._message_listener.Modified()

  def InvalidateIterators(self) -> None:
    # It appears that the only way to reliably invalidate iterators to
    # self._values is to ensure that its size changes.
    original = self._values
    self._values = original.copy()
    original[None] = None

  # This is defined in the abstract base, but we can do it much more cheaply.
  def clear(self) -> None:
    self._values.clear()
    self._message_listener.Modified()

  def GetEntryClass(self) -> Any:
    return self._entry_descriptor._concrete_class

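The comment above `get` is easy to miss: because `__getitem__` materializes missing keys (defaultdict-style), the `get` inherited from `MutableMapping` (which calls `self[key]` and catches `KeyError`) would silently insert every probed key. A minimal sketch of that interaction, using a hypothetical `DefaultingMap` class (not part of the real protobuf API) with the same override:

```python
from collections.abc import MutableMapping


class DefaultingMap(MutableMapping):
    """Toy map whose __getitem__ inserts a default on a missing key,
    mimicking ScalarMap's defaultdict-like behavior."""

    def __init__(self):
        self._values = {}

    def __getitem__(self, key):
        # Missing keys are materialized with a default, like ScalarMap.
        return self._values.setdefault(key, 0)

    def __setitem__(self, key, value):
        self._values[key] = value

    def __delitem__(self, key):
        del self._values[key]

    def __len__(self):
        return len(self._values)

    def __iter__(self):
        return iter(self._values)

    def get(self, key, default=None):
        # Overridden so a plain lookup probe does NOT insert the key;
        # the inherited MutableMapping.get would call __getitem__ and
        # thereby insert it.
        return self._values[key] if key in self._values else default


m = DefaultingMap()
m.get('a')                    # probe does not insert 'a'
assert len(m) == 0
m['b']                        # __getitem__ inserts the default
assert dict(m) == {'b': 0}
```

The same reasoning applies to `MessageMap.get` further down.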
class MessageMap(MutableMapping[_K, _V]):
  """Simple, type-checked, dict-like container for holding submessage values."""

  # Disallows assignment to other attributes.
  __slots__ = ['_key_checker', '_values', '_message_listener',
               '_message_descriptor', '_entry_descriptor']

  def __init__(
      self,
      message_listener: Any,
      message_descriptor: Any,
      key_checker: Any,
      entry_descriptor: Any,
  ) -> None:
    """
    Args:
      message_listener: A MessageListener implementation.
        The MessageMap will call this object's Modified() method when it
        is modified.
      message_descriptor: The MessageDescriptor of the submessage values.
      key_checker: A type_checkers.ValueChecker instance to run on keys
        inserted into this container.
      entry_descriptor: The MessageDescriptor of a map entry: key and value.
    """
    self._message_listener = message_listener
    self._message_descriptor = message_descriptor
    self._key_checker = key_checker
    self._entry_descriptor = entry_descriptor
    self._values = {}

  def __getitem__(self, key: _K) -> _V:
    key = self._key_checker.CheckValue(key)
    try:
      return self._values[key]
    except KeyError:
      new_element = self._message_descriptor._concrete_class()
      new_element._SetListener(self._message_listener)
      self._values[key] = new_element
      self._message_listener.Modified()
      return new_element

  def get_or_create(self, key: _K) -> _V:
    """get_or_create() is an alias for getitem (i.e. map[key]).

    Args:
      key: The key to get or create in the map.

    This is useful in cases where you want to be explicit that the call is
    mutating the map. This can avoid lint errors for statements like this
    that otherwise would appear to be pointless statements:

      msg.my_map[key]
    """
    return self[key]

  @overload
  def get(self, key: _K) -> Optional[_V]:
    ...

  @overload
  def get(self, key: _K, default: _T) -> Union[_V, _T]:
    ...

  # We need to override this explicitly, because our defaultdict-like behavior
  # will make the default implementation (from our base class) always insert
  # the key.
  def get(self, key, default=None):
    if key in self:
      return self[key]
    else:
      return default

  def __contains__(self, item: _K) -> bool:
    item = self._key_checker.CheckValue(item)
    return item in self._values

  def __setitem__(self, key: _K, value: _V) -> NoReturn:
    raise ValueError('May not set values directly, call my_map[key].foo = 5')

  def __delitem__(self, key: _K) -> None:
    key = self._key_checker.CheckValue(key)
    del self._values[key]
    self._message_listener.Modified()

  def __len__(self) -> int:
    return len(self._values)

  def __iter__(self) -> Iterator[_K]:
    return iter(self._values)

  def __repr__(self) -> str:
    return repr(self._values)

  def setdefault(self, key: _K, value: Optional[_V] = None) -> _V:
    raise NotImplementedError(
        'Setting a message map value directly is not supported; call'
        ' my_map[key].foo = 5'
    )

  def MergeFrom(self, other: 'MessageMap[_K, _V]') -> None:
    # pylint: disable=protected-access
    for key in other._values:
      # According to documentation: "When parsing from the wire or when merging,
      # if there are duplicate map keys the last key seen is used".
      if key in self:
        del self[key]
      self[key].CopyFrom(other[key])
    # self._message_listener.Modified() not required here, because
    # mutations to submessages already propagate.

  def InvalidateIterators(self) -> None:
    # It appears that the only way to reliably invalidate iterators to
    # self._values is to ensure that its size changes.
    original = self._values
    self._values = original.copy()
    original[None] = None

  # This is defined in the abstract base, but we can do it much more cheaply.
  def clear(self) -> None:
    self._values.clear()
    self._message_listener.Modified()

  def GetEntryClass(self) -> Any:
    return self._entry_descriptor._concrete_class

class _UnknownField:
  """A parsed unknown field."""

  # Disallows assignment to other attributes.
  __slots__ = ['_field_number', '_wire_type', '_data']

  def __init__(self, field_number, wire_type, data):
    self._field_number = field_number
    self._wire_type = wire_type
    self._data = data

  def __lt__(self, other):
    # pylint: disable=protected-access
    return self._field_number < other._field_number

  def __eq__(self, other):
    if self is other:
      return True
    # pylint: disable=protected-access
    return (self._field_number == other._field_number and
            self._wire_type == other._wire_type and
            self._data == other._data)


class UnknownFieldRef:  # pylint: disable=missing-class-docstring

  def __init__(self, parent, index):
    self._parent = parent
    self._index = index

  def _check_valid(self):
    if not self._parent:
      raise ValueError('UnknownField does not exist. '
                       'The parent message might be cleared.')
    if self._index >= len(self._parent):
      raise ValueError('UnknownField does not exist. '
                       'The parent message might be cleared.')

  @property
  def field_number(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._field_number

  @property
  def wire_type(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._wire_type

  @property
  def data(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._data

class UnknownFieldSet:
  """UnknownField container."""

  # Disallows assignment to other attributes.
  __slots__ = ['_values']

  def __init__(self):
    self._values = []

  def __getitem__(self, index):
    if self._values is None:
      raise ValueError('UnknownFields does not exist. '
                       'The parent message might be cleared.')
    size = len(self._values)
    if index < 0:
      index += size
    if index < 0 or index >= size:
      raise IndexError('index %d out of range' % index)

    return UnknownFieldRef(self, index)

  def _internal_get(self, index):
    return self._values[index]

  def __len__(self):
    if self._values is None:
      raise ValueError('UnknownFields does not exist. '
                       'The parent message might be cleared.')
    return len(self._values)

  def _add(self, field_number, wire_type, data):
    unknown_field = _UnknownField(field_number, wire_type, data)
    self._values.append(unknown_field)
    return unknown_field

  def __iter__(self):
    for i in range(len(self)):
      yield UnknownFieldRef(self, i)

  def _extend(self, other):
    if other is None:
      return
    # pylint: disable=protected-access
    self._values.extend(other._values)

  def __eq__(self, other):
    if self is other:
      return True
    # Sort unknown fields because their order shouldn't
    # affect equality test.
    values = list(self._values)
    if other is None:
      return not values
    values.sort()
    # pylint: disable=protected-access
    other_values = sorted(other._values)
    return values == other_values

  def _clear(self):
    for value in self._values:
      # pylint: disable=protected-access
      if isinstance(value._data, UnknownFieldSet):
        value._data._clear()  # pylint: disable=protected-access
    self._values = None
@@ -0,0 +1,806 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Code for encoding protocol message primitives.

Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.

This code is designed to push the Python interpreter's performance to the
limits.

The basic idea is that at startup time, for every field (i.e. every
FieldDescriptor) we construct two functions: a "sizer" and an "encoder". The
sizer takes a value of this field's type and computes its byte size. The
encoder takes a writer function and a value. It encodes the value into byte
strings and invokes the writer function to write those strings. Typically the
writer function is the write() method of a BytesIO.

We try to do as much work as possible when constructing the writer and the
sizer rather than when calling them. In particular:
* We copy any needed global functions to local variables, so that we do not need
  to do costly global table lookups at runtime.
* Similarly, we try to do any attribute lookups at startup time if possible.
* Every field's tag is encoded to bytes at startup, since it can't change at
  runtime.
* Whatever component of the field size we can compute at startup, we do.
* We *avoid* sharing code if doing so would make the code slower and not sharing
  does not burden us too much. For example, encoders for repeated fields do
  not just call the encoders for singular fields in a loop because this would
  add an extra function call overhead for every loop iteration; instead, we
  manually inline the single-value encoder into the loop.
* If a Python function lacks a return statement, Python actually generates
  instructions to pop the result of the last statement off the stack, push
  None onto the stack, and then return that. If we really don't care what
  value is returned, then we can save two instructions by returning the
  result of the last statement. It looks funny but it helps.
* We assume that type and bounds checking has happened at a higher level.
"""

__author__ = 'kenton@google.com (Kenton Varda)'

import struct

from google.protobuf.internal import wire_format


# This will overflow and thus become IEEE-754 "infinity". We would use
# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
_POS_INF = 1e10000
_NEG_INF = -_POS_INF

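The two-function scheme the docstring describes — a sizer and an encoder built once per field, with the tag bytes precomputed — can be sketched in miniature. `make_varint_field` below is an illustrative name, not part of the real protobuf API, and handles only single-byte tags (field numbers below 16) with wire type 0:

```python
import io


def make_varint_field(field_number):
    """Build a (sizer, encoder) pair for one unsigned-varint field.

    The tag is encoded to bytes at construction time, as the module
    docstring describes; nothing tag-related happens per call.
    """
    # Tag byte: (field_number << 3) | wire_type; wire type 0 is varint.
    tag = bytes([(field_number << 3) | 0])

    def encode_varint(write, value):
        # Base-128 little-endian groups; high bit set on all but the last.
        while value > 0x7f:
            write(bytes([(value & 0x7f) | 0x80]))
            value >>= 7
        write(bytes([value]))

    def sizer(value):
        size = 1
        while value > 0x7f:
            size += 1
            value >>= 7
        return len(tag) + size

    def encoder(write, value):
        write(tag)
        encode_varint(write, value)

    return sizer, encoder


sizer, encoder = make_varint_field(1)
buf = io.BytesIO()
encoder(buf.write, 300)
assert buf.getvalue() == b'\x08\xac\x02'   # tag for field 1, then varint 300
assert sizer(300) == len(buf.getvalue())
```

The real module generalizes this pattern across every field type, which is why the rest of the file is dominated by closure-returning constructors.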
def _VarintSize(value):
  """Compute the size of a varint value."""
  if value <= 0x7f: return 1
  if value <= 0x3fff: return 2
  if value <= 0x1fffff: return 3
  if value <= 0xfffffff: return 4
  if value <= 0x7ffffffff: return 5
  if value <= 0x3ffffffffff: return 6
  if value <= 0x1ffffffffffff: return 7
  if value <= 0xffffffffffffff: return 8
  if value <= 0x7fffffffffffffff: return 9
  return 10


def _SignedVarintSize(value):
  """Compute the size of a signed varint value."""
  if value < 0: return 10
  if value <= 0x7f: return 1
  if value <= 0x3fff: return 2
  if value <= 0x1fffff: return 3
  if value <= 0xfffffff: return 4
  if value <= 0x7ffffffff: return 5
  if value <= 0x3ffffffffff: return 6
  if value <= 0x1ffffffffffff: return 7
  if value <= 0xffffffffffffff: return 8
  if value <= 0x7fffffffffffffff: return 9
  return 10

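Each threshold in the tables above is 2**(7*k) - 1, the largest value whose varint encoding fits in k seven-bit groups. A small sketch (an independent reimplementation, not the helper above) confirms the table against an actual byte count:

```python
def varint_size(value):
    """Count the bytes in the base-128 varint encoding of a non-negative
    integer, one byte per seven bits."""
    size = 1
    while value > 0x7f:
        size += 1
        value >>= 7
    return size


# The table's thresholds are exactly the k-byte boundaries:
assert varint_size(0x7f) == 1 and varint_size(0x80) == 2
assert varint_size(0x3fff) == 2 and varint_size(0x4000) == 3
assert varint_size(2**63 - 1) == 9 and varint_size(2**63) == 10
```

This is also why `_SignedVarintSize` returns 10 for any negative value: negatives are sign-extended to 64 bits before encoding, and 64 bits need ten seven-bit groups.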
def _TagSize(field_number):
  """Returns the number of bytes required to serialize a tag with this field
  number."""
  # Just pass in type 0, since the type won't affect the tag+type size.
  return _VarintSize(wire_format.PackTag(field_number, 0))


# --------------------------------------------------------------------
# In this section we define some generic sizers. Each of these functions
# takes parameters specific to a particular field type, e.g. int32 or fixed64.
# It returns another function which in turn takes parameters specific to a
# particular field, e.g. the field number and whether it is repeated or packed.
# Look at the next section to see how these are used.

def _SimpleSizer(compute_value_size):
  """A sizer which uses the function compute_value_size to compute the size of
  each value. Typically compute_value_size is _VarintSize."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = 0
        for element in value:
          result += compute_value_size(element)
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      def RepeatedFieldSize(value):
        result = tag_size * len(value)
        for element in value:
          result += compute_value_size(element)
        return result
      return RepeatedFieldSize
    else:
      def FieldSize(value):
        return tag_size + compute_value_size(value)
      return FieldSize

  return SpecificSizer

def _ModifiedSizer(compute_value_size, modify_value):
  """Like SimpleSizer, but modify_value is invoked on each value before it is
  passed to compute_value_size. modify_value is typically ZigZagEncode."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = 0
        for element in value:
          result += compute_value_size(modify_value(element))
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      def RepeatedFieldSize(value):
        result = tag_size * len(value)
        for element in value:
          result += compute_value_size(modify_value(element))
        return result
      return RepeatedFieldSize
    else:
      def FieldSize(value):
        return tag_size + compute_value_size(modify_value(value))
      return FieldSize

  return SpecificSizer

def _FixedSizer(value_size):
  """Like _SimpleSizer except for a fixed-size field. The input is the size
  of one value."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = len(value) * value_size
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      element_size = value_size + tag_size
      def RepeatedFieldSize(value):
        return len(value) * element_size
      return RepeatedFieldSize
    else:
      field_size = value_size + tag_size
      def FieldSize(value):
        return field_size
      return FieldSize

  return SpecificSizer

# ====================================================================
# Here we declare a sizer constructor for each field type. Each "sizer
# constructor" is a function that takes (field_number, is_repeated, is_packed)
# as parameters and returns a sizer, which in turn takes a field value as
# a parameter and returns its encoded size.


Int32Sizer = Int64Sizer = EnumSizer = _SimpleSizer(_SignedVarintSize)

UInt32Sizer = UInt64Sizer = _SimpleSizer(_VarintSize)

SInt32Sizer = SInt64Sizer = _ModifiedSizer(
    _SignedVarintSize, wire_format.ZigZagEncode)

Fixed32Sizer = SFixed32Sizer = FloatSizer = _FixedSizer(4)
Fixed64Sizer = SFixed64Sizer = DoubleSizer = _FixedSizer(8)

BoolSizer = _FixedSizer(1)

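The `SInt32Sizer`/`SInt64Sizer` pair runs values through ZigZag encoding first, which maps signed integers onto unsigned ones so that small-magnitude negatives stay small on the wire instead of costing ten varint bytes. A sketch of the mapping (an independent reimplementation, not `wire_format.ZigZagEncode` itself):

```python
def zigzag_encode(value):
    """ZigZag-map a signed int to an unsigned one: 0, -1, 1, -2, 2, ...
    become 0, 1, 2, 3, 4, ... so sign no longer forces a long varint."""
    if value >= 0:
        return value << 1
    return (value << 1) ^ ~0


# Small magnitudes, either sign, map to small unsigned values:
assert [zigzag_encode(v) for v in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
```

Without this step, every negative `sint32` would sign-extend to 64 bits and, as `_SignedVarintSize` shows, encode as 10 bytes.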
def StringSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a string field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  local_len = len
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = local_len(element.encode('utf-8'))
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = local_len(value.encode('utf-8'))
      return tag_size + local_VarintSize(l) + l
    return FieldSize

def BytesSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a bytes field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  local_len = len
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = local_len(element)
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = local_len(value)
      return tag_size + local_VarintSize(l) + l
    return FieldSize

def GroupSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a group field."""

  tag_size = _TagSize(field_number) * 2
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        result += element.ByteSize()
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      return tag_size + value.ByteSize()
    return FieldSize

def MessageSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a message field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = element.ByteSize()
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = value.ByteSize()
      return tag_size + local_VarintSize(l) + l
    return FieldSize

# --------------------------------------------------------------------
# MessageSet is special: it needs custom logic to compute its size properly.


def MessageSetItemSizer(field_number):
  """Returns a sizer for extensions of MessageSet.

  The message set message looks like this:
    message MessageSet {
      repeated group Item = 1 {
        required int32 type_id = 2;
        required string message = 3;
      }
    }
  """
  static_size = (_TagSize(1) * 2 + _TagSize(2) + _VarintSize(field_number) +
                 _TagSize(3))
  local_VarintSize = _VarintSize

  def FieldSize(value):
    l = value.ByteSize()
    return static_size + local_VarintSize(l) + l

  return FieldSize

# --------------------------------------------------------------------
# Map is special: it needs custom logic to compute its size properly.


def MapSizer(field_descriptor, is_message_map):
  """Returns a sizer for a map field."""

  # Can't look at field_descriptor.message_type._concrete_class because it may
  # not have been initialized yet.
  message_type = field_descriptor.message_type
  message_sizer = MessageSizer(field_descriptor.number, False, False)

  def FieldSize(map_value):
    total = 0
    for key in map_value:
      value = map_value[key]
      # It's wasteful to create the messages and throw them away one second
      # later since we'll do the same for the actual encode. But there's not an
      # obvious way to avoid this within the current design without tons of code
      # duplication. For message map, value.ByteSize() should be called to
      # update the status.
      entry_msg = message_type._concrete_class(key=key, value=value)
      total += message_sizer(entry_msg)
      if is_message_map:
        value.ByteSize()
    return total

  return FieldSize

# ====================================================================
# Encoders!


def _VarintEncoder():
  """Return an encoder for a basic varint value (does not include tag)."""

  local_int2byte = struct.Struct('>B').pack

  def EncodeVarint(write, value, unused_deterministic=None):
    bits = value & 0x7f
    value >>= 7
    while value:
      write(local_int2byte(0x80|bits))
      bits = value & 0x7f
      value >>= 7
    return write(local_int2byte(bits))

  return EncodeVarint

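The loop in `EncodeVarint` emits the value in seven-bit groups, least significant first, with the high bit of each byte acting as a continuation flag. A standalone copy of the same loop (for illustration only, outside the closure) makes the byte layout visible:

```python
import io
import struct

local_int2byte = struct.Struct('>B').pack


def encode_varint(write, value):
    """Standalone copy of the EncodeVarint loop: low seven bits first,
    0x80 set on every byte except the last."""
    bits = value & 0x7f
    value >>= 7
    while value:
        write(local_int2byte(0x80 | bits))
        bits = value & 0x7f
        value >>= 7
    return write(local_int2byte(bits))


out = io.BytesIO()
encode_varint(out.write, 150)
# 150 = 0b1_0010110 -> continuation byte 0x96, then final byte 0x01.
assert out.getvalue() == b'\x96\x01'
```

Note the `return write(...)` on the last line: it exists only to exploit the "return the last statement's result" micro-optimization described in the module docstring.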
def _SignedVarintEncoder():
  """Return an encoder for a basic signed varint value (does not include
  tag)."""

  local_int2byte = struct.Struct('>B').pack

  def EncodeSignedVarint(write, value, unused_deterministic=None):
    if value < 0:
      value += (1 << 64)
    bits = value & 0x7f
    value >>= 7
    while value:
      write(local_int2byte(0x80|bits))
      bits = value & 0x7f
      value >>= 7
    return write(local_int2byte(bits))

  return EncodeSignedVarint

_EncodeVarint = _VarintEncoder()
_EncodeSignedVarint = _SignedVarintEncoder()


def _VarintBytes(value):
  """Encode the given integer as a varint and return the bytes. This is only
  called at startup time so it doesn't need to be fast."""

  pieces = []
  _EncodeVarint(pieces.append, value, True)
  return b"".join(pieces)


def TagBytes(field_number, wire_type):
  """Encode the given tag and return the bytes. Only called at startup."""

  return bytes(_VarintBytes(wire_format.PackTag(field_number, wire_type)))

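`TagBytes` combines two steps: pack the field number and wire type into one integer, then varint-encode it. A sketch of both steps with stdlib-only reimplementations (`pack_tag` mirrors what `wire_format.PackTag` computes; the names here are illustrative):

```python
def pack_tag(field_number, wire_type):
    """Tag integer: field number in the high bits, 3-bit wire type low."""
    return (field_number << 3) | wire_type


def varint_bytes(value):
    """Minimal varint-to-bytes helper, as _VarintBytes produces."""
    out = bytearray()
    while value > 0x7f:
        out.append((value & 0x7f) | 0x80)
        value >>= 7
    out.append(value)
    return bytes(out)


# Field 1, wire type 0 (varint): single tag byte 0x08.
assert varint_bytes(pack_tag(1, 0)) == b'\x08'
# Field 16 no longer fits in 7 bits (16 << 3 = 128), so the tag is 2 bytes.
assert varint_bytes(pack_tag(16, 0)) == b'\x80\x01'
```

Precomputing these bytes once per field is what lets the per-call encoders below reduce to `write(tag_bytes)` plus the value encoding.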
# --------------------------------------------------------------------
# As with sizers (see above), we have a number of common encoder
# implementations.


def _SimpleEncoder(wire_type, encode_value, compute_value_size):
  """Return a constructor for an encoder for fields of a particular type.

  Args:
    wire_type: The field's wire type, for encoding tags.
    encode_value: A function which encodes an individual value, e.g.
      _EncodeVarint().
    compute_value_size: A function which computes the size of an individual
      value, e.g. _VarintSize().
  """

  def SpecificEncoder(field_number, is_repeated, is_packed):
    if is_packed:
      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
      local_EncodeVarint = _EncodeVarint
      def EncodePackedField(write, value, deterministic):
        write(tag_bytes)
        size = 0
        for element in value:
          size += compute_value_size(element)
        local_EncodeVarint(write, size, deterministic)
        for element in value:
          encode_value(write, element, deterministic)
      return EncodePackedField
    elif is_repeated:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeRepeatedField(write, value, deterministic):
        for element in value:
          write(tag_bytes)
          encode_value(write, element, deterministic)
      return EncodeRepeatedField
    else:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeField(write, value, deterministic):
        write(tag_bytes)
        return encode_value(write, value, deterministic)
      return EncodeField

  return SpecificEncoder

def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
  """Like SimpleEncoder but additionally invokes modify_value on every value
  before passing it to encode_value. Usually modify_value is ZigZagEncode."""

  def SpecificEncoder(field_number, is_repeated, is_packed):
    if is_packed:
      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
      local_EncodeVarint = _EncodeVarint
      def EncodePackedField(write, value, deterministic):
        write(tag_bytes)
        size = 0
        for element in value:
          size += compute_value_size(modify_value(element))
        local_EncodeVarint(write, size, deterministic)
        for element in value:
          encode_value(write, modify_value(element), deterministic)
      return EncodePackedField
    elif is_repeated:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeRepeatedField(write, value, deterministic):
        for element in value:
          write(tag_bytes)
          encode_value(write, modify_value(element), deterministic)
      return EncodeRepeatedField
    else:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeField(write, value, deterministic):
        write(tag_bytes)
        return encode_value(write, modify_value(value), deterministic)
      return EncodeField

  return SpecificEncoder

def _StructPackEncoder(wire_type, format):
  """Return a constructor for an encoder for a fixed-width field.

  Args:
    wire_type: The field's wire type, for encoding tags.
    format: The format string to pass to struct.pack().
  """

  value_size = struct.calcsize(format)

  def SpecificEncoder(field_number, is_repeated, is_packed):
    local_struct_pack = struct.pack
    if is_packed:
      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
      local_EncodeVarint = _EncodeVarint
      def EncodePackedField(write, value, deterministic):
        write(tag_bytes)
        local_EncodeVarint(write, len(value) * value_size, deterministic)
        for element in value:
          write(local_struct_pack(format, element))
      return EncodePackedField
    elif is_repeated:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeRepeatedField(write, value, unused_deterministic=None):
        for element in value:
          write(tag_bytes)
          write(local_struct_pack(format, element))
      return EncodeRepeatedField
    else:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeField(write, value, unused_deterministic=None):
        write(tag_bytes)
        return write(local_struct_pack(format, value))
      return EncodeField

  return SpecificEncoder

def _FloatingPointEncoder(wire_type, format):
  """Return a constructor for an encoder for float fields.

  This is like StructPackEncoder, but catches errors that may be due to
  passing non-finite floating-point values to struct.pack, and makes a
  second attempt to encode those values.

  Args:
    wire_type: The field's wire type, for encoding tags.
    format: The format string to pass to struct.pack().
  """

  value_size = struct.calcsize(format)
  if value_size == 4:
    def EncodeNonFiniteOrRaise(write, value):
      # Remember that the serialized form uses little-endian byte order.
      if value == _POS_INF:
        write(b'\x00\x00\x80\x7F')
      elif value == _NEG_INF:
        write(b'\x00\x00\x80\xFF')
      elif value != value:  # NaN
        write(b'\x00\x00\xC0\x7F')
      else:
        raise
  elif value_size == 8:
    def EncodeNonFiniteOrRaise(write, value):
      if value == _POS_INF:
        write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
      elif value == _NEG_INF:
        write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
      elif value != value:  # NaN
        write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
      else:
        raise
  else:
    raise ValueError('Can\'t encode floating-point values that are '
                     '%d bytes long (only 4 or 8)' % value_size)

  def SpecificEncoder(field_number, is_repeated, is_packed):
    local_struct_pack = struct.pack
    if is_packed:
      tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
      local_EncodeVarint = _EncodeVarint
      def EncodePackedField(write, value, deterministic):
        write(tag_bytes)
        local_EncodeVarint(write, len(value) * value_size, deterministic)
        for element in value:
          # This try/except block is going to be faster than any code that
          # we could write to check whether element is finite.
          try:
            write(local_struct_pack(format, element))
          except SystemError:
            EncodeNonFiniteOrRaise(write, element)
      return EncodePackedField
    elif is_repeated:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeRepeatedField(write, value, unused_deterministic=None):
        for element in value:
          write(tag_bytes)
          try:
            write(local_struct_pack(format, element))
          except SystemError:
            EncodeNonFiniteOrRaise(write, element)
      return EncodeRepeatedField
    else:
      tag_bytes = TagBytes(field_number, wire_type)
      def EncodeField(write, value, unused_deterministic=None):
        write(tag_bytes)
        try:
          write(local_struct_pack(format, value))
        except SystemError:
          EncodeNonFiniteOrRaise(write, value)
      return EncodeField

  return SpecificEncoder

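The hard-coded byte strings in `EncodeNonFiniteOrRaise` are the IEEE 754 little-endian encodings of the non-finite values. A quick standalone check with the plain `struct` module (no protobuf imports) confirms the patterns:

```python
import math
import struct

# IEEE 754 single precision, little-endian: +inf is 0x7F800000.
assert struct.pack('<f', math.inf) == b'\x00\x00\x80\x7F'
assert struct.pack('<f', -math.inf) == b'\x00\x00\x80\xFF'

# Double precision: +inf is 0x7FF0000000000000.
assert struct.pack('<d', math.inf) == b'\x00\x00\x00\x00\x00\x00\xF0\x7F'
assert struct.pack('<d', -math.inf) == b'\x00\x00\x00\x00\x00\x00\xF0\xFF'

# The quiet-NaN pattern written above round-trips to a NaN.
(nan32,) = struct.unpack('<f', b'\x00\x00\xC0\x7F')
assert math.isnan(nan32)
```

On current CPython, `struct.pack` handles inf/NaN directly; the `except SystemError` fallback exists for older interpreters where packing a non-finite value could fail.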
# ====================================================================
# Here we declare an encoder constructor for each field type. These work
# very similarly to sizer constructors, described earlier.


Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)

UInt32Encoder = UInt64Encoder = _SimpleEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)

SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
    wire_format.ZigZagEncode)

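For reference, `wire_format.ZigZagEncode` maps signed integers to unsigned ones so that values of small magnitude produce short varints. A standalone sketch of the 64-bit transform (the name `zigzag64` is ours, not from this file):

```python
def zigzag64(n):
    # Interleave non-negative and negative values: 0, -1, 1, -2, 2, ...
    # map to 0, 1, 2, 3, 4, ...  Mask to 64 bits, since Python ints
    # are unbounded and the arithmetic shift of a negative n is negative.
    return ((n << 1) ^ (n >> 63)) & 0xFFFFFFFFFFFFFFFF

assert [zigzag64(n) for n in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
```

Without ZigZag, a plain varint encoding of `-1` would take the maximal ten bytes, which is why `sint32`/`sint64` fields use this transform.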
# Note that Python conveniently guarantees that when using the '<' prefix on
# formats, they will also have the same size across all platforms (as opposed
# to without the prefix, where their sizes depend on the C compiler's basic
# type sizes).
Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')

def BoolEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a boolean field."""

  false_byte = b'\x00'
  true_byte = b'\x01'
  if is_packed:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
    local_EncodeVarint = _EncodeVarint
    def EncodePackedField(write, value, deterministic):
      write(tag_bytes)
      local_EncodeVarint(write, len(value), deterministic)
      for element in value:
        if element:
          write(true_byte)
        else:
          write(false_byte)
    return EncodePackedField
  elif is_repeated:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
    def EncodeRepeatedField(write, value, unused_deterministic=None):
      for element in value:
        write(tag_bytes)
        if element:
          write(true_byte)
        else:
          write(false_byte)
    return EncodeRepeatedField
  else:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
    def EncodeField(write, value, unused_deterministic=None):
      write(tag_bytes)
      if value:
        return write(true_byte)
      return write(false_byte)
    return EncodeField

def StringEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a string field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  local_len = len
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        encoded = element.encode('utf-8')
        write(tag)
        local_EncodeVarint(write, local_len(encoded), deterministic)
        write(encoded)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      encoded = value.encode('utf-8')
      write(tag)
      local_EncodeVarint(write, local_len(encoded), deterministic)
      return write(encoded)
    return EncodeField

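The tag, varint length, payload layout that `StringEncoder` emits can be reproduced in a few lines of plain Python (our own minimal `encode_varint`, not the `_EncodeVarint` from this file); field 1 holding `"testing"` yields the classic wire-format example:

```python
def encode_varint(n):
    # Base-128 varint, least-significant 7-bit group first;
    # the high bit of each byte marks "more bytes follow".
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        if n:
            out.append(bits | 0x80)
        else:
            out.append(bits)
            return bytes(out)

def encode_string_field(field_number, text):
    # Wire type 2 = length-delimited; the tag is (field_number << 3) | 2.
    tag = encode_varint((field_number << 3) | 2)
    payload = text.encode('utf-8')
    return tag + encode_varint(len(payload)) + payload

assert encode_string_field(1, 'testing') == b'\x0a\x07testing'
```

`BytesEncoder` below produces the same framing, just without the UTF-8 encode step.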
def BytesEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a bytes field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  local_len = len
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(tag)
        local_EncodeVarint(write, local_len(element), deterministic)
        write(element)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(tag)
      local_EncodeVarint(write, local_len(value), deterministic)
      return write(value)
    return EncodeField

def GroupEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a group field."""

  start_tag = TagBytes(field_number, wire_format.WIRETYPE_START_GROUP)
  end_tag = TagBytes(field_number, wire_format.WIRETYPE_END_GROUP)
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(start_tag)
        element._InternalSerialize(write, deterministic)
        write(end_tag)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(start_tag)
      value._InternalSerialize(write, deterministic)
      return write(end_tag)
    return EncodeField

def MessageEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a message field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(tag)
        local_EncodeVarint(write, element.ByteSize(), deterministic)
        element._InternalSerialize(write, deterministic)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(tag)
      local_EncodeVarint(write, value.ByteSize(), deterministic)
      return value._InternalSerialize(write, deterministic)
    return EncodeField


# --------------------------------------------------------------------
# As before, MessageSet is special.

def MessageSetItemEncoder(field_number):
  """Encoder for extensions of MessageSet.

  The message set message looks like this:
    message MessageSet {
      repeated group Item = 1 {
        required int32 type_id = 2;
        required string message = 3;
      }
    }
  """
  start_bytes = b"".join([
      TagBytes(1, wire_format.WIRETYPE_START_GROUP),
      TagBytes(2, wire_format.WIRETYPE_VARINT),
      _VarintBytes(field_number),
      TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)])
  end_bytes = TagBytes(1, wire_format.WIRETYPE_END_GROUP)
  local_EncodeVarint = _EncodeVarint

  def EncodeField(write, value, deterministic):
    write(start_bytes)
    local_EncodeVarint(write, value.ByteSize(), deterministic)
    value._InternalSerialize(write, deterministic)
    return write(end_bytes)

  return EncodeField

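The `start_bytes` prefix above is fully determined by the wire format: each tag is `(field_number << 3) | wire_type`, varint-encoded. A standalone recomputation of the four framing tags (the `tag_byte` helper and constant names are ours):

```python
def tag_byte(field_number, wire_type):
    # Valid only while the tag fits in one varint byte (value < 128).
    value = (field_number << 3) | wire_type
    assert value < 0x80
    return bytes([value])

VARINT, LENGTH_DELIMITED, START_GROUP, END_GROUP = 0, 2, 3, 4

# Item start: group 1 opens, then field 2 (type_id) as a varint.
assert tag_byte(1, START_GROUP) == b'\x0b'
assert tag_byte(2, VARINT) == b'\x10'
# Field 3 (message) is length-delimited; closing group 1 ends the item.
assert tag_byte(3, LENGTH_DELIMITED) == b'\x1a'
assert tag_byte(1, END_GROUP) == b'\x0c'
```

So every MessageSet item is framed as `0B 10 <type_id> 1A <len> <payload> 0C`, which is exactly what `EncodeField` writes.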
# --------------------------------------------------------------------
# As before, Map is special.


def MapEncoder(field_descriptor):
  """Encoder for map fields.

  Maps always have a wire format like this:
    message MapEntry {
      key_type key = 1;
      value_type value = 2;
    }
    repeated MapEntry map = N;
  """
  # Can't look at field_descriptor.message_type._concrete_class because it may
  # not have been initialized yet.
  message_type = field_descriptor.message_type
  encode_message = MessageEncoder(field_descriptor.number, False, False)

  def EncodeField(write, value, deterministic):
    value_keys = sorted(value.keys()) if deterministic else value
    for key in value_keys:
      entry_msg = message_type._concrete_class(key=key, value=value[key])
      encode_message(write, entry_msg, deterministic)

  return EncodeField

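Deterministic serialization only changes iteration order in `MapEncoder`: the `value_keys` line emits entries in sorted-key order, while the non-deterministic path just iterates the container as-is. A dict-only sketch of that choice:

```python
def entry_order(mapping, deterministic):
    # Deterministic output iterates keys in sorted order; otherwise
    # in whatever order the container yields (insertion order for dicts).
    return sorted(mapping.keys()) if deterministic else list(mapping)

m = {'b': 2, 'a': 1, 'c': 3}
assert entry_order(m, deterministic=True) == ['a', 'b', 'c']
assert entry_order(m, deterministic=False) == ['b', 'a', 'c']
```

Sorted keys are what make deterministic serialization produce byte-identical output for equal maps.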
@@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""A simple wrapper around enum types to expose utility functions.

Instances are created as properties with the same name as the enum they wrap
on proto classes. For usage, see:
  reflection_test.py
"""

import sys

__author__ = 'rabsatt@google.com (Kevin Rabsatt)'


class EnumTypeWrapper(object):
  """A utility for finding the names of enum values."""

  DESCRIPTOR = None

  # This is a type alias, which mypy typing stubs can type as
  # a genericized parameter constrained to an int, allowing subclasses
  # to be typed with more constraint in .pyi stubs
  # Eg.
  # def MyGeneratedEnum(Message):
  #   ValueType = NewType('ValueType', int)
  #   def Name(self, number: MyGeneratedEnum.ValueType) -> str
  ValueType = int

  def __init__(self, enum_type):
    """Inits EnumTypeWrapper with an EnumDescriptor."""
    self._enum_type = enum_type
    self.DESCRIPTOR = enum_type  # pylint: disable=invalid-name

  def Name(self, number):  # pylint: disable=invalid-name
    """Returns a string containing the name of an enum value."""
    try:
      return self._enum_type.values_by_number[number].name
    except KeyError:
      pass  # fall out to break exception chaining

    if not isinstance(number, int):
      raise TypeError(
          'Enum value for {} must be an int, but got {} {!r}.'.format(
              self._enum_type.name, type(number), number))
    else:
      # repr here to handle the odd case when you pass in a boolean.
      raise ValueError('Enum {} has no name defined for value {!r}'.format(
          self._enum_type.name, number))

  def Value(self, name):  # pylint: disable=invalid-name
    """Returns the value corresponding to the given enum name."""
    try:
      return self._enum_type.values_by_name[name].number
    except KeyError:
      pass  # fall out to break exception chaining
    raise ValueError('Enum {} has no value defined for name {!r}'.format(
        self._enum_type.name, name))

  def keys(self):
    """Return a list of the string names in the enum.

    Returns:
      A list of strs, in the order they were defined in the .proto file.
    """

    return [value_descriptor.name
            for value_descriptor in self._enum_type.values]

  def values(self):
    """Return a list of the integer values in the enum.

    Returns:
      A list of ints, in the order they were defined in the .proto file.
    """

    return [value_descriptor.number
            for value_descriptor in self._enum_type.values]

  def items(self):
    """Return a list of the (name, value) pairs of the enum.

    Returns:
      A list of (str, int) pairs, in the order they were defined
      in the .proto file.
    """
    return [(value_descriptor.name, value_descriptor.number)
            for value_descriptor in self._enum_type.values]

  def __getattr__(self, name):
    """Returns the value corresponding to the given enum name."""
    try:
      return super(
          EnumTypeWrapper,
          self).__getattribute__('_enum_type').values_by_name[name].number
    except KeyError:
      pass  # fall out to break exception chaining
    raise AttributeError('Enum {} has no value defined for name {!r}'.format(
        self._enum_type.name, name))

  def __or__(self, other):
    """Returns the union type of self and other."""
    if sys.version_info >= (3, 10):
      return type(self) | other
    else:
      raise NotImplementedError(
          'You may not use | on EnumTypes (or classes) below python 3.10'
      )
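`EnumTypeWrapper` amounts to two dict lookups over an `EnumDescriptor`: `values_by_number` backs `Name()` and `values_by_name` backs `Value()` and `__getattr__`. A descriptor-free sketch of that contract, using `SimpleNamespace` stand-ins for the value descriptors (the `fake_enum` object is ours, not a real descriptor):

```python
from types import SimpleNamespace

# Stand-in for an EnumDescriptor: one value descriptor per member,
# indexed both by number and by name, mirroring the real attributes.
members = [SimpleNamespace(name='FOO', number=0),
           SimpleNamespace(name='BAR', number=5)]
fake_enum = SimpleNamespace(
    name='MyEnum',
    values=members,
    values_by_number={v.number: v for v in members},
    values_by_name={v.name: v for v in members})

# The Name()/Value() round trip the wrapper provides:
assert fake_enum.values_by_number[5].name == 'BAR'
assert fake_enum.values_by_name['FOO'].number == 0
# keys()/values()/items() preserve .proto definition order:
assert [(v.name, v.number) for v in fake_enum.values] == [('FOO', 0), ('BAR', 5)]
```

The `except KeyError: pass` pattern in the real methods exists only so that the `ValueError`/`TypeError` raised afterwards does not chain the internal `KeyError` into the traceback.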
@@ -0,0 +1,194 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains _ExtensionDict class to represent extensions.
"""

from google.protobuf.internal import type_checkers
from google.protobuf.descriptor import FieldDescriptor


def _VerifyExtensionHandle(message, extension_handle):
  """Verify that the given extension handle is valid."""

  if not isinstance(extension_handle, FieldDescriptor):
    raise KeyError('HasExtension() expects an extension handle, got: %s' %
                   extension_handle)

  if not extension_handle.is_extension:
    raise KeyError('"%s" is not an extension.' % extension_handle.full_name)

  if not extension_handle.containing_type:
    raise KeyError('"%s" is missing a containing_type.'
                   % extension_handle.full_name)

  if extension_handle.containing_type is not message.DESCRIPTOR:
    raise KeyError('Extension "%s" extends message type "%s", but this '
                   'message is of type "%s".' %
                   (extension_handle.full_name,
                    extension_handle.containing_type.full_name,
                    message.DESCRIPTOR.full_name))


# TODO: Unify error handling of "unknown extension" crap.
# TODO: Support iteritems()-style iteration over all
# extensions with the "has" bits turned on?
class _ExtensionDict(object):

  """Dict-like container for Extension fields on proto instances.

  Note that in all cases we expect extension handles to be
  FieldDescriptors.
  """

  def __init__(self, extended_message):
    """
    Args:
      extended_message: Message instance for which we are the Extensions dict.
    """
    self._extended_message = extended_message

  def __getitem__(self, extension_handle):
    """Returns the current value of the given extension handle."""

    _VerifyExtensionHandle(self._extended_message, extension_handle)

    result = self._extended_message._fields.get(extension_handle)
    if result is not None:
      return result

    if extension_handle.is_repeated:
      result = extension_handle._default_constructor(self._extended_message)
    elif extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
      message_type = extension_handle.message_type
      if not hasattr(message_type, '_concrete_class'):
        # pylint: disable=g-import-not-at-top
        from google.protobuf import message_factory
        message_factory.GetMessageClass(message_type)
      if not hasattr(extension_handle.message_type, '_concrete_class'):
        from google.protobuf import message_factory
        message_factory.GetMessageClass(extension_handle.message_type)
      result = extension_handle.message_type._concrete_class()
      try:
        result._SetListener(self._extended_message._listener_for_children)
      except ReferenceError:
        pass
    else:
      # Singular scalar -- just return the default without inserting into the
      # dict.
      return extension_handle.default_value

    # Atomically check if another thread has preempted us and, if not, swap
    # in the new object we just created. If someone has preempted us, we
    # take that object and discard ours.
    # WARNING: We are relying on setdefault() being atomic. This is true
    # in CPython but we haven't investigated others. This warning appears
    # in several other locations in this file.
    result = self._extended_message._fields.setdefault(
        extension_handle, result)

    return result

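The warning comment above leans on `dict.setdefault` being a single atomic operation in CPython (the GIL is held for the whole call), so two threads racing to lazily create a field value end up sharing one object. A dict-only sketch of the pattern:

```python
fields = {}

# First caller creates a candidate value and publishes it...
candidate = []
winner = fields.setdefault('key', candidate)
assert winner is candidate

# ...a later (or racing) caller's candidate is discarded in favor of
# whatever is already stored, so both callers see the same object.
other = fields.setdefault('key', ['something else'])
assert other is winner
assert fields['key'] is winner
```

A plain `if key not in d: d[key] = value` check would have a window between the test and the store; `setdefault` closes it.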
  def __eq__(self, other):
    if not isinstance(other, self.__class__):
      return False

    my_fields = self._extended_message.ListFields()
    other_fields = other._extended_message.ListFields()

    # Get rid of non-extension fields. ListFields() returns
    # (descriptor, value) pairs, so filter on the descriptor.
    my_fields = [field for field in my_fields if field[0].is_extension]
    other_fields = [field for field in other_fields if field[0].is_extension]

    return my_fields == other_fields

  def __ne__(self, other):
    return not self == other

  def __len__(self):
    fields = self._extended_message.ListFields()
    # Get rid of non-extension fields.
    extension_fields = [field for field in fields if field[0].is_extension]
    return len(extension_fields)

  def __hash__(self):
    raise TypeError('unhashable object')

  # Note that this is only meaningful for non-repeated, scalar extension
  # fields. Note also that we may have to call _Modified() when we do
  # successfully set a field this way, to set any necessary "has" bits in the
  # ancestors of the extended message.
  def __setitem__(self, extension_handle, value):
    """If extension_handle specifies a non-repeated, scalar extension
    field, sets the value of that field.
    """

    _VerifyExtensionHandle(self._extended_message, extension_handle)

    if (extension_handle.is_repeated or
        extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE):
      raise TypeError(
          'Cannot assign to extension "%s" because it is a repeated or '
          'composite type.' % extension_handle.full_name)

    # It's slightly wasteful to lookup the type checker each time,
    # but we expect this to be a vanishingly uncommon case anyway.
    type_checker = type_checkers.GetTypeChecker(extension_handle)
    # pylint: disable=protected-access
    self._extended_message._fields[extension_handle] = (
        type_checker.CheckValue(value))
    self._extended_message._Modified()

  def __delitem__(self, extension_handle):
    self._extended_message.ClearExtension(extension_handle)

  def _FindExtensionByName(self, name):
    """Tries to find a known extension with the specified name.

    Args:
      name: Extension full name.

    Returns:
      Extension field descriptor.
    """
    descriptor = self._extended_message.DESCRIPTOR
    extensions = descriptor.file.pool._extensions_by_name[descriptor]
    return extensions.get(name, None)

  def _FindExtensionByNumber(self, number):
    """Tries to find a known extension with the field number.

    Args:
      number: Extension field number.

    Returns:
      Extension field descriptor.
    """
    descriptor = self._extended_message.DESCRIPTOR
    extensions = descriptor.file.pool._extensions_by_number[descriptor]
    return extensions.get(number, None)

  def __iter__(self):
    # Return a generator over the populated extension fields
    return (f[0] for f in self._extended_message.ListFields()
            if f[0].is_extension)

  def __contains__(self, extension_handle):
    _VerifyExtensionHandle(self._extended_message, extension_handle)

    if extension_handle not in self._extended_message._fields:
      return False

    if extension_handle.is_repeated:
      return bool(self._extended_message._fields.get(extension_handle))

    if extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
      value = self._extended_message._fields.get(extension_handle)
      # pylint: disable=protected-access
      return value is not None and value._is_present_in_parent

    return True
@@ -0,0 +1,312 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains FieldMask class."""

from google.protobuf.descriptor import FieldDescriptor


class FieldMask(object):
  """Class for FieldMask message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts FieldMask to string according to proto3 JSON spec."""
    camelcase_paths = []
    for path in self.paths:
      camelcase_paths.append(_SnakeCaseToCamelCase(path))
    return ','.join(camelcase_paths)

  def FromJsonString(self, value):
    """Converts string to FieldMask according to proto3 JSON spec."""
    if not isinstance(value, str):
      raise ValueError('FieldMask JSON value not a string: {!r}'.format(value))
    self.Clear()
    if value:
      for path in value.split(','):
        self.paths.append(_CamelCaseToSnakeCase(path))

  def IsValidForDescriptor(self, message_descriptor):
    """Checks whether the FieldMask is valid for Message Descriptor."""
    for path in self.paths:
      if not _IsValidPath(message_descriptor, path):
        return False
    return True

  def AllFieldsFromDescriptor(self, message_descriptor):
    """Gets all direct fields of Message Descriptor to FieldMask."""
    self.Clear()
    for field in message_descriptor.fields:
      self.paths.append(field.name)

  def CanonicalFormFromMask(self, mask):
    """Converts a FieldMask to the canonical form.

    Removes paths that are covered by another path. For example,
    "foo.bar" is covered by "foo" and will be removed if "foo"
    is also in the FieldMask. Then sorts all paths in alphabetical order.

    Args:
      mask: The original FieldMask to be converted.
    """
    tree = _FieldMaskTree(mask)
    tree.ToFieldMask(self)

  def Union(self, mask1, mask2):
    """Merges mask1 and mask2 into this FieldMask."""
    _CheckFieldMaskMessage(mask1)
    _CheckFieldMaskMessage(mask2)
    tree = _FieldMaskTree(mask1)
    tree.MergeFromFieldMask(mask2)
    tree.ToFieldMask(self)

  def Intersect(self, mask1, mask2):
    """Intersects mask1 and mask2 into this FieldMask."""
    _CheckFieldMaskMessage(mask1)
    _CheckFieldMaskMessage(mask2)
    tree = _FieldMaskTree(mask1)
    intersection = _FieldMaskTree()
    for path in mask2.paths:
      tree.IntersectPath(path, intersection)
    intersection.ToFieldMask(self)

  def MergeMessage(
      self, source, destination,
      replace_message_field=False, replace_repeated_field=False):
    """Merges fields specified in FieldMask from source to destination.

    Args:
      source: Source message.
      destination: The destination message to be merged into.
      replace_message_field: Replace message field if True. Merge message
          field if False.
      replace_repeated_field: Replace repeated field if True. Append
          elements of repeated field if False.
    """
    tree = _FieldMaskTree(self)
    tree.MergeMessage(
        source, destination, replace_message_field, replace_repeated_field)

def _IsValidPath(message_descriptor, path):
  """Checks whether the path is valid for Message Descriptor."""
  parts = path.split('.')
  last = parts.pop()
  for name in parts:
    field = message_descriptor.fields_by_name.get(name)
    if (field is None or
        field.is_repeated or
        field.type != FieldDescriptor.TYPE_MESSAGE):
      return False
    message_descriptor = field.message_type
  return last in message_descriptor.fields_by_name


def _CheckFieldMaskMessage(message):
  """Raises ValueError if message is not a FieldMask."""
  message_descriptor = message.DESCRIPTOR
  if (message_descriptor.name != 'FieldMask' or
      message_descriptor.file.name != 'google/protobuf/field_mask.proto'):
    raise ValueError('Message {0} is not a FieldMask.'.format(
        message_descriptor.full_name))


def _SnakeCaseToCamelCase(path_name):
  """Converts a path name from snake_case to camelCase."""
  result = []
  after_underscore = False
  for c in path_name:
    if c.isupper():
      raise ValueError(
          'Fail to print FieldMask to Json string: Path name '
          '{0} must not contain uppercase letters.'.format(path_name))
    if after_underscore:
      if c.islower():
        result.append(c.upper())
        after_underscore = False
      else:
        raise ValueError(
            'Fail to print FieldMask to Json string: The '
            'character after a "_" must be a lowercase letter '
            'in path name {0}.'.format(path_name))
    elif c == '_':
      after_underscore = True
    else:
      result += c

  if after_underscore:
    raise ValueError('Fail to print FieldMask to Json string: Trailing "_" '
                     'in path name {0}.'.format(path_name))
  return ''.join(result)


def _CamelCaseToSnakeCase(path_name):
  """Converts a field name from camelCase to snake_case."""
  result = []
  for c in path_name:
    if c == '_':
      raise ValueError('Fail to parse FieldMask: Path name '
                       '{0} must not contain "_"s.'.format(path_name))
    if c.isupper():
      result += '_'
      result += c.lower()
    else:
      result += c
  return ''.join(result)

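The two converters above are inverses on well-formed names: `_` plus a lowercase letter maps to one uppercase letter and back. A compact standalone round-trip sketch (simplified, no error handling, helper names ours):

```python
import re

def snake_to_camel(name):
    # 'foo_bar_baz' -> 'fooBarBaz': uppercase the letter after each '_'.
    return re.sub(r'_([a-z])', lambda m: m.group(1).upper(), name)

def camel_to_snake(name):
    # 'fooBarBaz' -> 'foo_bar_baz': prefix each uppercase letter with '_'.
    return re.sub(r'([A-Z])', lambda m: '_' + m.group(1).lower(), name)

assert snake_to_camel('user.display_name') == 'user.displayName'
assert camel_to_snake('user.displayName') == 'user.display_name'
assert camel_to_snake(snake_to_camel('a_b_c')) == 'a_b_c'
```

The real functions reject inputs that would break the round trip (uppercase in snake_case, `_` in camelCase, trailing `_`), which is what the `ValueError`s enforce.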
class _FieldMaskTree(object):
  """Represents a FieldMask in a tree structure.

  For example, given a FieldMask "foo.bar,foo.baz,bar.baz",
  the FieldMaskTree will be:
      [_root] -+- foo -+- bar
               |       |
               |       +- baz
               |
               +- bar --- baz
  In the tree, each leaf node represents a field path.
  """

  __slots__ = ('_root',)

  def __init__(self, field_mask=None):
    """Initializes the tree by FieldMask."""
    self._root = {}
    if field_mask:
      self.MergeFromFieldMask(field_mask)

  def MergeFromFieldMask(self, field_mask):
    """Merges a FieldMask to the tree."""
    for path in field_mask.paths:
      self.AddPath(path)

  def AddPath(self, path):
    """Adds a field path into the tree.

    If the field path to add is a sub-path of an existing field path
    in the tree (i.e., a leaf node), it means the tree already matches
    the given path so nothing will be added to the tree. If the path
    matches an existing non-leaf node in the tree, that non-leaf node
    will be turned into a leaf node with all its children removed because
    the path matches all the node's children. Otherwise, a new path will
    be added.

    Args:
      path: The field path to add.
    """
    node = self._root
    for name in path.split('.'):
      if name not in node:
        node[name] = {}
      elif not node[name]:
        # Pre-existing empty node implies we already have this entire tree.
        return
      node = node[name]
    # Remove any sub-trees we might have had.
    node.clear()

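`AddPath`'s collapse-and-dedupe behavior is easy to check with the nested-dict representation alone; here is a standalone copy of the loop above acting on a plain dict:

```python
def add_path(root, path):
    # Same logic as _FieldMaskTree.AddPath over a plain nested dict.
    node = root
    for name in path.split('.'):
        if name not in node:
            node[name] = {}
        elif not node[name]:
            return  # an existing leaf already covers this path
        node = node[name]
    node.clear()  # this path now covers any previously added sub-paths

tree = {}
add_path(tree, 'foo.bar')
add_path(tree, 'foo.baz')
assert tree == {'foo': {'bar': {}, 'baz': {}}}
add_path(tree, 'foo')      # covers and removes both children
assert tree == {'foo': {}}
add_path(tree, 'foo.qux')  # already covered by the leaf 'foo'
assert tree == {'foo': {}}
```

This is exactly why `CanonicalFormFromMask` can drop `"foo.bar"` when `"foo"` is present: building the tree performs the coverage check, and walking it back out yields only the leaves.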
  def ToFieldMask(self, field_mask):
    """Converts the tree to a FieldMask."""
    field_mask.Clear()
    _AddFieldPaths(self._root, '', field_mask)

  def IntersectPath(self, path, intersection):
    """Calculates the intersection part of a field path with this tree.

    Args:
      path: The field path to calculate the intersection with.
      intersection: The out tree to record the intersection part.
    """
    node = self._root
    for name in path.split('.'):
      if name not in node:
        return
      elif not node[name]:
        intersection.AddPath(path)
        return
      node = node[name]
    intersection.AddLeafNodes(path, node)

  def AddLeafNodes(self, prefix, node):
    """Adds leaf nodes beginning with prefix to this tree."""
    if not node:
      self.AddPath(prefix)
    for name in node:
      child_path = prefix + '.' + name
      self.AddLeafNodes(child_path, node[name])

  def MergeMessage(
      self, source, destination,
      replace_message, replace_repeated):
    """Merge all fields specified by this tree from source to destination."""
    _MergeMessage(
        self._root, source, destination, replace_message, replace_repeated)


def _StrConvert(value):
  """Converts value to str if it is not."""
  # This file is imported by c extension and some methods like ClearField
  # requires string for the field name. py2/py3 has different text
  # type and may use unicode.
  if not isinstance(value, str):
    return value.encode('utf-8')
  return value


def _MergeMessage(
    node, source, destination, replace_message, replace_repeated):
  """Merge all fields specified by a sub-tree from source to destination."""
  source_descriptor = source.DESCRIPTOR
  for name in node:
    child = node[name]
    field = source_descriptor.fields_by_name[name]
    if field is None:
      raise ValueError('Error: Can\'t find field {0} in message {1}.'.format(
          name, source_descriptor.full_name))
    if child:
      # Sub-paths are only allowed for singular message fields.
      if (field.is_repeated or
          field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE):
        raise ValueError('Error: Field {0} in message {1} is not a singular '
                         'message field and cannot have sub-fields.'.format(
                             name, source_descriptor.full_name))
      if source.HasField(name):
        _MergeMessage(
            child, getattr(source, name), getattr(destination, name),
            replace_message, replace_repeated)
      continue
    if field.is_repeated:
      if replace_repeated:
        destination.ClearField(_StrConvert(name))
      repeated_source = getattr(source, name)
      repeated_destination = getattr(destination, name)
      repeated_destination.MergeFrom(repeated_source)
    else:
      if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
        if replace_message:
          destination.ClearField(_StrConvert(name))
        if source.HasField(name):
          getattr(destination, name).MergeFrom(getattr(source, name))
|
||||
elif not field.has_presence or source.HasField(name):
|
||||
setattr(destination, name, getattr(source, name))
|
||||
else:
|
||||
destination.ClearField(_StrConvert(name))
|
||||
|
||||
|
||||
def _AddFieldPaths(node, prefix, field_mask):
|
||||
"""Adds the field paths descended from node to field_mask."""
|
||||
if not node and prefix:
|
||||
field_mask.paths.append(prefix)
|
||||
return
|
||||
for name in sorted(node):
|
||||
if prefix:
|
||||
child_path = prefix + '.' + name
|
||||
else:
|
||||
child_path = name
|
||||
_AddFieldPaths(node[name], child_path, field_mask)
|
||||
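The trie logic above (AddPath collapsing sub-paths, _AddFieldPaths emitting sorted leaf paths) can be sketched as a minimal, self-contained version. The dict-of-dicts layout mirrors the code above; the standalone `add_path`/`to_paths` helper names are illustrative stand-ins for the tree's methods, not part of the library API:

```python
# Minimal sketch of the field-path trie used above: a nested dict where an
# empty dict marks a leaf (a complete path).
def add_path(root, path):
    node = root
    for name in path.split('.'):
        if name not in node:
            node[name] = {}
        elif not node[name]:
            # Pre-existing empty node: the tree already covers this path.
            return
        node = node[name]
    # The new path subsumes any more specific paths under it.
    node.clear()

def to_paths(node, prefix=''):
    """Returns sorted leaf paths, like _AddFieldPaths appends to a FieldMask."""
    if not node and prefix:
        return [prefix]
    paths = []
    for name in sorted(node):
        child = prefix + '.' + name if prefix else name
        paths.extend(to_paths(node[name], child))
    return paths

tree = {}
add_path(tree, 'a.b.c')
add_path(tree, 'a.b')   # collapses 'a.b.c' into 'a.b'
add_path(tree, 'x.y')
print(to_paths(tree))   # ['a.b', 'x.y']
```

Adding `'a.b'` after `'a.b.c'` demonstrates the collapse rule: the shorter path matches all children of the non-leaf node, so the node is cleared into a leaf.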
@@ -0,0 +1,55 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Defines a listener interface for observing certain
state transitions on Message objects.

Also defines a null implementation of this interface.
"""

__author__ = 'robinson@google.com (Will Robinson)'


class MessageListener(object):

  """Listens for modifications made to a message. Meant to be registered via
  Message._SetListener().

  Attributes:
    dirty: If True, then calling Modified() would be a no-op. This can be
      used to avoid these calls entirely in the common case.
  """

  def Modified(self):
    """Called every time the message is modified in such a way that the parent
    message may need to be updated. This currently means either:
    (a) The message was modified for the first time, so the parent message
        should henceforth mark the message as present.
    (b) The message's cached byte size became dirty -- i.e. the message was
        modified for the first time after a previous call to ByteSize().
        Therefore the parent should also mark its byte size as dirty.
    Note that (a) implies (b), since new objects start out with a client cached
    size (zero). However, we document (a) explicitly because it is important.

    Modified() will *only* be called in response to one of these two events --
    not every time the sub-message is modified.

    Note that if the listener's |dirty| attribute is true, then calling
    Modified at the moment would be a no-op, so it can be skipped. Performance-
    sensitive callers should check this attribute directly before calling since
    it will be true most of the time.
    """

    raise NotImplementedError


class NullMessageListener(object):

  """No-op MessageListener implementation."""

  def Modified(self):
    pass
@@ -0,0 +1,5 @@
"""
This file contains the serialized FeatureSetDefaults object corresponding to
the Pure Python runtime. This is used for feature resolution under Editions.
"""
_PROTOBUF_INTERNAL_PYTHON_EDITION_DEFAULTS = b"\n\027\030\204\007\"\000*\020\010\001\020\002\030\002 \003(\0010\0028\002@\001\n\027\030\347\007\"\000*\020\010\002\020\001\030\001 \002(\0010\0018\002@\001\n\027\030\350\007\"\014\010\001\020\001\030\001 \002(\0010\001*\0048\002@\001\n\027\030\351\007\"\020\010\001\020\001\030\001 \002(\0010\0018\001@\002*\000 \346\007(\351\007"
File diff suppressed because it is too large
@@ -0,0 +1,128 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""A subclass of unittest.TestCase which checks for reference leaks.

To use:
- Use testing_refleak.BaseTestCase instead of unittest.TestCase
- Configure and compile Python with --with-pydebug

If sys.gettotalrefcount() is not available (because Python was built without
the Py_DEBUG option), then this module is a no-op and tests will run normally.
"""

import copyreg
import gc
import sys
import unittest


class LocalTestResult(unittest.TestResult):
  """A TestResult which forwards events to a parent object, except for Skips."""

  def __init__(self, parent_result):
    unittest.TestResult.__init__(self)
    self.parent_result = parent_result

  def addError(self, test, error):
    self.parent_result.addError(test, error)

  def addFailure(self, test, error):
    self.parent_result.addFailure(test, error)

  def addSkip(self, test, reason):
    pass

  def addDuration(self, test, duration):
    pass


class ReferenceLeakCheckerMixin(object):
  """A mixin class for TestCase, which checks reference counts."""

  NB_RUNS = 3

  def run(self, result=None):
    testMethod = getattr(self, self._testMethodName)
    expecting_failure_method = getattr(testMethod, "__unittest_expecting_failure__", False)
    expecting_failure_class = getattr(self, "__unittest_expecting_failure__", False)
    if expecting_failure_class or expecting_failure_method:
      return

    # python_message.py registers all Message classes to some pickle global
    # registry, which makes the classes immortal.
    # We save a copy of this registry, and reset it before we count references.
    self._saved_pickle_registry = copyreg.dispatch_table.copy()

    # Run the test twice, to warm up the instance attributes.
    super(ReferenceLeakCheckerMixin, self).run(result=result)
    super(ReferenceLeakCheckerMixin, self).run(result=result)

    local_result = LocalTestResult(result)
    num_flakes = 0
    refcount_deltas = []

    # Observe the refcount, then create oldrefcount which actually makes the
    # refcount 1 higher than the recorded value immediately
    oldrefcount = self._getRefcounts()
    while len(refcount_deltas) < self.NB_RUNS:
      oldrefcount = self._getRefcounts()
      super(ReferenceLeakCheckerMixin, self).run(result=local_result)
      newrefcount = self._getRefcounts()
      # If the GC was able to collect some objects after the call to run() that
      # it could not collect before the call, then the counts won't match.
      if newrefcount < oldrefcount and num_flakes < 2:
        # This result is (probably) a flake -- garbage collectors aren't very
        # predictable, but a lower ending refcount is the opposite of the
        # failure we are testing for. If the result is repeatable, then we will
        # eventually report it, but not before trying to eliminate it.
        num_flakes += 1
        continue
      num_flakes = 0
      refcount_deltas.append(newrefcount - oldrefcount)
    print(refcount_deltas, self)

    try:
      self.assertEqual(refcount_deltas, [0] * self.NB_RUNS)
    except Exception:  # pylint: disable=broad-except
      result.addError(self, sys.exc_info())

  def _getRefcounts(self):
    if hasattr(sys, "_clear_internal_caches"):  # Since 3.13
      sys._clear_internal_caches()  # pylint: disable=protected-access
    else:
      sys._clear_type_cache()  # pylint: disable=protected-access
    copyreg.dispatch_table.clear()
    copyreg.dispatch_table.update(self._saved_pickle_registry)
    # It is sometimes necessary to gc.collect() multiple times, to ensure
    # that all objects can be collected.
    gc.collect()
    gc.collect()
    gc.collect()
    return sys.gettotalrefcount()


if hasattr(sys, 'gettotalrefcount'):

  def TestCase(test_class):
    new_bases = (ReferenceLeakCheckerMixin,) + test_class.__bases__
    new_class = type(test_class)(
        test_class.__name__, new_bases, dict(test_class.__dict__))
    return new_class
  SkipReferenceLeakChecker = unittest.skip

else:
  # When Py_DEBUG is not enabled, run the tests normally.

  def TestCase(test_class):
    return test_class

  def SkipReferenceLeakChecker(reason):
    del reason  # Don't skip, so don't need a reason.
    def Same(func):
      return func
    return Same
@@ -0,0 +1,455 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Provides type checking routines.

This module defines type checking utilities in the forms of dictionaries:

VALUE_CHECKERS: A dictionary of field types and a value validation object.
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
  function.
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
  function.
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their
  corresponding wire types.
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
  function.
"""

__author__ = 'robinson@google.com (Will Robinson)'

import numbers
import struct
import warnings

from google.protobuf import descriptor
from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format

_FieldDescriptor = descriptor.FieldDescriptor
# TODO: Remove this warning count after 34.0
# Bool-to-int/enum assignment warnings will print at most 100 times, which
# should be enough for users to notice without causing timeouts.
_BoolWarningCount = 100

def TruncateToFourByteFloat(original):
  return struct.unpack('<f', struct.pack('<f', original))[0]


def ToShortestFloat(original):
  """Returns the shortest float that has the same value on the wire."""
  # All 4 byte floats have between 6 and 9 significant digits, so we
  # start with 6 as the lower bound.
  # It has to be iterative because using '.9g' directly cannot get rid
  # of the noise for most values. For example, if float_field is set to
  # 0.9, '.9g' will print 0.899999976.
  precision = 6
  rounded = float('{0:.{1}g}'.format(original, precision))
  while TruncateToFourByteFloat(rounded) != original:
    precision += 1
    rounded = float('{0:.{1}g}'.format(original, precision))
  return rounded

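The two helpers above are self-contained (only `struct`), so the shortest-float search can be exercised directly. This sketch restates them with snake_case names for a standalone demo:

```python
import struct

def truncate_to_four_byte_float(original):
    # Round-trip through an IEEE 754 single-precision encoding.
    return struct.unpack('<f', struct.pack('<f', original))[0]

def to_shortest_float(original):
    # Start at 6 significant digits and widen until the rounded value
    # maps back to the same 32-bit float.
    precision = 6
    rounded = float('{0:.{1}g}'.format(original, precision))
    while truncate_to_four_byte_float(rounded) != original:
        precision += 1
        rounded = float('{0:.{1}g}'.format(original, precision))
    return rounded

# 0.9 stored as a 32-bit float reads back as 0.8999999761581421 in a double,
# but the shortest double that maps to the same 32-bit value is 0.9 again.
wire_value = truncate_to_four_byte_float(0.9)
print(wire_value)                      # 0.8999999761581421
print(to_shortest_float(wire_value))   # 0.9
```

This is exactly the noise the docstring mentions: `'%.9g'` on the wire value prints `0.899999976`, while the iterative search recovers the `0.9` a user originally wrote.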
def GetTypeChecker(field):
  """Returns a type checker for a message field of the specified types.

  Args:
    field: FieldDescriptor object for this field.

  Returns:
    An instance of TypeChecker which can be used to verify the types
    of values assigned to a field of the specified type.
  """
  if (field.cpp_type == _FieldDescriptor.CPPTYPE_STRING and
      field.type == _FieldDescriptor.TYPE_STRING):
    return UnicodeValueChecker()
  if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM:
    if field.enum_type.is_closed:
      return EnumValueChecker(field.enum_type)
    else:
      # When open enums are supported, any int32 can be assigned.
      return _VALUE_CHECKERS[_FieldDescriptor.CPPTYPE_INT32]
  return _VALUE_CHECKERS[field.cpp_type]


# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):

  """Type checker used to catch type errors as early as possible
  when the client is setting scalar fields in protocol messages.
  """

  def __init__(self, *acceptable_types):
    self._acceptable_types = acceptable_types

  def CheckValue(self, proposed_value):
    """Type check the provided value and return it.

    The returned value might have been normalized to another type.
    """
    if not isinstance(proposed_value, self._acceptable_types):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), self._acceptable_types))
      raise TypeError(message)
    return proposed_value


class TypeCheckerWithDefault(TypeChecker):

  def __init__(self, default_value, *acceptable_types):
    TypeChecker.__init__(self, *acceptable_types)
    self._default_value = default_value

  def DefaultValue(self):
    return self._default_value


class BoolValueChecker(object):
  """Type checker used for bool fields."""

  def CheckValue(self, proposed_value):
    if not hasattr(proposed_value, '__index__'):
      # Under NumPy 2.3, numpy.bool does not have an __index__ method.
      if (type(proposed_value).__module__ == 'numpy' and
          type(proposed_value).__name__ == 'bool'):
        return bool(proposed_value)
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bool, int)))
      raise TypeError(message)

    if (type(proposed_value).__module__ == 'numpy' and
        type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bool, int)))
      raise TypeError(message)

    return bool(proposed_value)

  def DefaultValue(self):
    return False


# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):

  """Checker used for integer fields. Performs type-check and range check."""

  def CheckValue(self, proposed_value):
    global _BoolWarningCount
    if type(proposed_value) == bool and _BoolWarningCount > 0:
      _BoolWarningCount -= 1
      message = (
          '%.1024r has type %s, but expected one of: %s. This warning '
          'will turn into error in 7.34.0, please fix it before that.'
          % (
              proposed_value,
              type(proposed_value),
              (int,),
          )
      )
      # TODO: Raise errors in 2026 Q1 release
      warnings.warn(message)

    if not hasattr(proposed_value, '__index__') or (
        type(proposed_value).__module__ == 'numpy' and
        type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (int,)))
      raise TypeError(message)

    if not self._MIN <= int(proposed_value) <= self._MAX:
      raise ValueError('Value out of range: %d' % proposed_value)
    # We force all values to int to make alternate implementations where the
    # distinction is more significant (e.g. the C++ implementation) simpler.
    proposed_value = int(proposed_value)
    return proposed_value

  def DefaultValue(self):
    return 0

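Stripped of the NumPy and bool special cases, IntValueChecker's contract reduces to an `__index__` type-check plus a bounds comparison against per-type class attributes. A minimal sketch (the `Int32Checker` class and snake_case method are illustrative, not library names), using the signed 32-bit bounds that appear below:

```python
class Int32Checker:
    # Bounds for a signed 32-bit field, matching Int32ValueChecker.
    _MIN = -2147483648
    _MAX = 2147483647

    def check_value(self, proposed_value):
        # Anything without __index__ (e.g. str, float) is rejected outright.
        if not hasattr(proposed_value, '__index__'):
            raise TypeError('%r has type %s, but expected an int'
                            % (proposed_value, type(proposed_value)))
        if not self._MIN <= int(proposed_value) <= self._MAX:
            raise ValueError('Value out of range: %d' % proposed_value)
        # Normalize to a plain int, as the real checker does.
        return int(proposed_value)

checker = Int32Checker()
print(checker.check_value(7))         # 7
# checker.check_value(2**31)  raises ValueError: Value out of range
# checker.check_value('7')    raises TypeError
```

Subclassing with different `_MIN`/`_MAX` pairs is how the library gets the four integer checkers from one `CheckValue` body.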
class EnumValueChecker(object):

  """Checker used for enum fields. Performs type-check and range check."""

  def __init__(self, enum_type):
    self._enum_type = enum_type

  def CheckValue(self, proposed_value):
    global _BoolWarningCount
    if type(proposed_value) == bool and _BoolWarningCount > 0:
      _BoolWarningCount -= 1
      message = (
          '%.1024r has type %s, but expected one of: %s. This warning '
          'will turn into error in 7.34.0, please fix it before that.'
          % (
              proposed_value,
              type(proposed_value),
              (int,),
          )
      )
      # TODO: Raise errors in 2026 Q1 release
      warnings.warn(message)
    if not isinstance(proposed_value, numbers.Integral):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (int,)))
      raise TypeError(message)
    if int(proposed_value) not in self._enum_type.values_by_number:
      raise ValueError('Unknown enum value: %d' % proposed_value)
    return proposed_value

  def DefaultValue(self):
    return self._enum_type.values[0].number


class UnicodeValueChecker(object):

  """Checker used for string fields.

  Always returns a unicode value, even if the input is of type str.
  """

  def CheckValue(self, proposed_value):
    if not isinstance(proposed_value, (bytes, str)):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bytes, str)))
      raise TypeError(message)

    # If the value is of type 'bytes' make sure that it is valid UTF-8 data.
    if isinstance(proposed_value, bytes):
      try:
        proposed_value = proposed_value.decode('utf-8')
      except UnicodeDecodeError:
        raise ValueError('%.1024r has type bytes, but isn\'t valid UTF-8 '
                         'encoding. Non-UTF-8 strings must be converted to '
                         'unicode objects before being added.' %
                         (proposed_value))
    else:
      try:
        proposed_value.encode('utf8')
      except UnicodeEncodeError:
        raise ValueError('%.1024r isn\'t a valid unicode string and '
                         'can\'t be encoded in UTF-8.' %
                         (proposed_value))

    return proposed_value

  def DefaultValue(self):
    return u""


class Int32ValueChecker(IntValueChecker):
  # We're sure to use ints instead of longs here since comparison may be more
  # efficient.
  _MIN = -2147483648
  _MAX = 2147483647


class Uint32ValueChecker(IntValueChecker):
  _MIN = 0
  _MAX = (1 << 32) - 1


class Int64ValueChecker(IntValueChecker):
  _MIN = -(1 << 63)
  _MAX = (1 << 63) - 1


class Uint64ValueChecker(IntValueChecker):
  _MIN = 0
  _MAX = (1 << 64) - 1


# The max 4 bytes float is about 3.4028234663852886e+38
_FLOAT_MAX = float.fromhex('0x1.fffffep+127')
_FLOAT_MIN = -_FLOAT_MAX
_MAX_FLOAT_AS_DOUBLE_ROUNDED = 3.4028235677973366e38
_INF = float('inf')
_NEG_INF = float('-inf')


class DoubleValueChecker(object):
  """Checker used for double fields.

  Performs type-check and range check.
  """

  def CheckValue(self, proposed_value):
    """Check and convert proposed_value to float."""
    if (not hasattr(proposed_value, '__float__') and
        not hasattr(proposed_value, '__index__')) or (
            type(proposed_value).__module__ == 'numpy' and
            type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: int, float' %
                 (proposed_value, type(proposed_value)))
      raise TypeError(message)
    return float(proposed_value)

  def DefaultValue(self):
    return 0.0


class FloatValueChecker(DoubleValueChecker):
  """Checker used for float fields.

  Performs type-check and range check.

  Values exceeding a 32-bit float will be converted to inf/-inf.
  """

  def CheckValue(self, proposed_value):
    """Check and convert proposed_value to float."""
    converted_value = super().CheckValue(proposed_value)
    # This inf rounding matches the C++ proto SafeDoubleToFloat logic.
    if converted_value > _FLOAT_MAX:
      if converted_value <= _MAX_FLOAT_AS_DOUBLE_ROUNDED:
        return _FLOAT_MAX
      return _INF
    if converted_value < _FLOAT_MIN:
      if converted_value >= -_MAX_FLOAT_AS_DOUBLE_ROUNDED:
        return _FLOAT_MIN
      return _NEG_INF

    return TruncateToFourByteFloat(converted_value)

# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
    _FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
    _FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
    _FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
    _FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
    _FieldDescriptor.CPPTYPE_DOUBLE: DoubleValueChecker(),
    _FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(),
    _FieldDescriptor.CPPTYPE_BOOL: BoolValueChecker(),
    _FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes),
}


# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
    _FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
    _FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
    _FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
    _FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
    _FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
    _FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
    _FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
    _FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
    _FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
    _FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
    _FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
    _FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
    _FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
    _FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
    _FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
    _FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
    _FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
    _FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize
}


# Maps from field types to encoder constructors.
TYPE_TO_ENCODER = {
    _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder,
    _FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder,
    _FieldDescriptor.TYPE_INT64: encoder.Int64Encoder,
    _FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder,
    _FieldDescriptor.TYPE_INT32: encoder.Int32Encoder,
    _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder,
    _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder,
    _FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder,
    _FieldDescriptor.TYPE_STRING: encoder.StringEncoder,
    _FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder,
    _FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder,
    _FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder,
    _FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder,
    _FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder,
    _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder,
    _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder,
    _FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder,
    _FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder,
}


# Maps from field types to sizer constructors.
TYPE_TO_SIZER = {
    _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer,
    _FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer,
    _FieldDescriptor.TYPE_INT64: encoder.Int64Sizer,
    _FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer,
    _FieldDescriptor.TYPE_INT32: encoder.Int32Sizer,
    _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer,
    _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer,
    _FieldDescriptor.TYPE_BOOL: encoder.BoolSizer,
    _FieldDescriptor.TYPE_STRING: encoder.StringSizer,
    _FieldDescriptor.TYPE_GROUP: encoder.GroupSizer,
    _FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer,
    _FieldDescriptor.TYPE_BYTES: encoder.BytesSizer,
    _FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer,
    _FieldDescriptor.TYPE_ENUM: encoder.EnumSizer,
    _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer,
    _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer,
    _FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer,
    _FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer,
}


# Maps from field type to a decoder constructor.
TYPE_TO_DECODER = {
    _FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder,
    _FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder,
    _FieldDescriptor.TYPE_INT64: decoder.Int64Decoder,
    _FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder,
    _FieldDescriptor.TYPE_INT32: decoder.Int32Decoder,
    _FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder,
    _FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder,
    _FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder,
    _FieldDescriptor.TYPE_STRING: decoder.StringDecoder,
    _FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder,
    _FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder,
    _FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder,
    _FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder,
    _FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder,
    _FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder,
    _FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder,
    _FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder,
    _FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder,
}

# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
    _FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_STRING:
        wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
    _FieldDescriptor.TYPE_MESSAGE:
        wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_BYTES:
        wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}
@@ -0,0 +1,695 @@
|
||||
# Protocol Buffers - Google's data interchange format
|
||||
# Copyright 2008 Google Inc. All rights reserved.
|
||||
#
|
||||
# Use of this source code is governed by a BSD-style
|
||||
# license that can be found in the LICENSE file or at
|
||||
# https://developers.google.com/open-source/licenses/bsd
|
||||
|
||||
"""Contains well known classes.
|
||||
|
||||
This files defines well known classes which need extra maintenance including:
|
||||
- Any
|
||||
- Duration
|
||||
- FieldMask
|
||||
- Struct
|
||||
- Timestamp
|
||||
"""
|
||||
|
||||
__author__ = 'jieluo@google.com (Jie Luo)'

import calendar
import collections.abc
import datetime
from typing import Union
import warnings

from google.protobuf.internal import field_mask

FieldMask = field_mask.FieldMask

_TIMESTAMPFORMAT = '%Y-%m-%dT%H:%M:%S'
_NANOS_PER_SECOND = 1000000000
_NANOS_PER_MILLISECOND = 1000000
_NANOS_PER_MICROSECOND = 1000
_MILLIS_PER_SECOND = 1000
_MICROS_PER_SECOND = 1000000
_SECONDS_PER_DAY = 24 * 3600
_DURATION_SECONDS_MAX = 315576000000
_TIMESTAMP_SECONDS_MIN = -62135596800
_TIMESTAMP_SECONDS_MAX = 253402300799

_EPOCH_DATETIME_NAIVE = datetime.datetime(1970, 1, 1, tzinfo=None)
_EPOCH_DATETIME_AWARE = _EPOCH_DATETIME_NAIVE.replace(
    tzinfo=datetime.timezone.utc
)


class Any(object):
  """Class for Any Message type."""

  __slots__ = ()

  def Pack(
      self, msg, type_url_prefix='type.googleapis.com/', deterministic=None
  ):
    """Packs the specified message into current Any message."""
    if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
      self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
    else:
      self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
    self.value = msg.SerializeToString(deterministic=deterministic)

  def Unpack(self, msg):
    """Unpacks the current Any message into specified message."""
    descriptor = msg.DESCRIPTOR
    if not self.Is(descriptor):
      return False
    msg.ParseFromString(self.value)
    return True

  def TypeName(self):
    """Returns the protobuf type name of the inner message."""
    # Only last part is to be used: b/25630112
    return self.type_url.rpartition('/')[2]

  def Is(self, descriptor):
    """Checks if this Any represents the given protobuf type."""
    return '/' in self.type_url and self.TypeName() == descriptor.full_name

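The `type_url` handling in `Pack`, `TypeName`, and `Is` above can be illustrated without a generated message class. A minimal stdlib sketch; the helper name `pack_url` is hypothetical, not part of the protobuf API:

```python
# Hypothetical stand-in mirroring Any.Pack's type_url construction:
# ensure exactly one '/' between the prefix and the full type name.
def pack_url(prefix, full_name):
    if len(prefix) < 1 or prefix[-1] != '/':
        return '%s/%s' % (prefix, full_name)
    return '%s%s' % (prefix, full_name)

url = pack_url('type.googleapis.com/', 'google.protobuf.Duration')
assert url == 'type.googleapis.com/google.protobuf.Duration'

# TypeName() keeps only the part after the last '/', as in the class above.
assert url.rpartition('/')[2] == 'google.protobuf.Duration'
```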
class Timestamp(object):
  """Class for Timestamp message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts Timestamp to RFC 3339 date string format.

    Returns:
      A string converted from timestamp. The string is always Z-normalized
      and uses 3, 6 or 9 fractional digits as required to represent the
      exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
    """
    _CheckTimestampValid(self.seconds, self.nanos)
    nanos = self.nanos
    seconds = self.seconds % _SECONDS_PER_DAY
    days = (self.seconds - seconds) // _SECONDS_PER_DAY
    dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(days, seconds)

    result = dt.isoformat()
    if (nanos % 1e9) == 0:
      # If there are 0 fractional digits, the fractional
      # point '.' should be omitted when serializing.
      return result + 'Z'
    if (nanos % 1e6) == 0:
      # Serialize 3 fractional digits.
      return result + '.%03dZ' % (nanos / 1e6)
    if (nanos % 1e3) == 0:
      # Serialize 6 fractional digits.
      return result + '.%06dZ' % (nanos / 1e3)
    # Serialize 9 fractional digits.
    return result + '.%09dZ' % nanos

  def FromJsonString(self, value):
    """Parse a RFC 3339 date string format to Timestamp.

    Args:
      value: A date string. Any fractional digits (or none) and any offset are
        accepted as long as they fit into nano-seconds precision. Example of
        accepted format: '1972-01-01T10:00:20.021-05:00'

    Raises:
      ValueError: On parsing problems.
    """
    if not isinstance(value, str):
      raise ValueError('Timestamp JSON value not a string: {!r}'.format(value))
    timezone_offset = value.find('Z')
    if timezone_offset == -1:
      timezone_offset = value.find('+')
    if timezone_offset == -1:
      timezone_offset = value.rfind('-')
    if timezone_offset == -1:
      raise ValueError(
          'Failed to parse timestamp: missing valid timezone offset.'
      )
    time_value = value[0:timezone_offset]
    # Parse datetime and nanos.
    point_position = time_value.find('.')
    if point_position == -1:
      second_value = time_value
      nano_value = ''
    else:
      second_value = time_value[:point_position]
      nano_value = time_value[point_position + 1 :]
    if 't' in second_value:
      raise ValueError(
          "time data '{0}' does not match format '%Y-%m-%dT%H:%M:%S', "
          "lowercase 't' is not accepted".format(second_value)
      )
    date_object = datetime.datetime.strptime(second_value, _TIMESTAMPFORMAT)
    td = date_object - datetime.datetime(1970, 1, 1)
    seconds = td.seconds + td.days * _SECONDS_PER_DAY
    if len(nano_value) > 9:
      raise ValueError(
          'Failed to parse Timestamp: nanos {0} more than '
          '9 fractional digits.'.format(nano_value)
      )
    if nano_value:
      nanos = round(float('0.' + nano_value) * 1e9)
    else:
      nanos = 0
    # Parse timezone offsets.
    if value[timezone_offset] == 'Z':
      if len(value) != timezone_offset + 1:
        raise ValueError(
            'Failed to parse timestamp: invalid trailing data {0}.'.format(
                value
            )
        )
    else:
      timezone = value[timezone_offset:]
      pos = timezone.find(':')
      if pos == -1:
        raise ValueError('Invalid timezone offset value: {0}.'.format(timezone))
      if timezone[0] == '+':
        seconds -= (int(timezone[1:pos]) * 60 + int(timezone[pos + 1 :])) * 60
      else:
        seconds += (int(timezone[1:pos]) * 60 + int(timezone[pos + 1 :])) * 60
    # Set seconds and nanos
    _CheckTimestampValid(seconds, nanos)
    self.seconds = int(seconds)
    self.nanos = int(nanos)

  def GetCurrentTime(self):
    """Get the current UTC time into Timestamp."""
    self.FromDatetime(datetime.datetime.now(tz=datetime.timezone.utc))

  def ToNanoseconds(self):
    """Converts Timestamp to nanoseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return self.seconds * _NANOS_PER_SECOND + self.nanos

  def ToMicroseconds(self):
    """Converts Timestamp to microseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return (
        self.seconds * _MICROS_PER_SECOND + self.nanos // _NANOS_PER_MICROSECOND
    )

  def ToMilliseconds(self):
    """Converts Timestamp to milliseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return (
        self.seconds * _MILLIS_PER_SECOND + self.nanos // _NANOS_PER_MILLISECOND
    )

  def ToSeconds(self):
    """Converts Timestamp to seconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return self.seconds

  def FromNanoseconds(self, nanos):
    """Converts nanoseconds since epoch to Timestamp."""
    seconds = nanos // _NANOS_PER_SECOND
    nanos = nanos % _NANOS_PER_SECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromMicroseconds(self, micros):
    """Converts microseconds since epoch to Timestamp."""
    seconds = micros // _MICROS_PER_SECOND
    nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromMilliseconds(self, millis):
    """Converts milliseconds since epoch to Timestamp."""
    seconds = millis // _MILLIS_PER_SECOND
    nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromSeconds(self, seconds):
    """Converts seconds since epoch to Timestamp."""
    _CheckTimestampValid(seconds, 0)
    self.seconds = seconds
    self.nanos = 0

  def ToDatetime(self, tzinfo=None):
    """Converts Timestamp to a datetime.

    Args:
      tzinfo: A datetime.tzinfo subclass; defaults to None.

    Returns:
      If tzinfo is None, returns a timezone-naive UTC datetime (with no timezone
      information, i.e. not aware that it's UTC).

      Otherwise, returns a timezone-aware datetime in the input timezone.
    """
    # Using datetime.fromtimestamp for this would avoid constructing an extra
    # timedelta object and possibly an extra datetime. Unfortunately, that has
    # the disadvantage of not handling the full precision (on all platforms, see
    # https://github.com/python/cpython/issues/109849) or full range (on some
    # platforms, see https://github.com/python/cpython/issues/110042) of
    # datetime.
    _CheckTimestampValid(self.seconds, self.nanos)
    delta = datetime.timedelta(
        seconds=self.seconds,
        microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND),
    )
    if tzinfo is None:
      return _EPOCH_DATETIME_NAIVE + delta
    else:
      # Note the tz conversion has to come after the timedelta arithmetic.
      return (_EPOCH_DATETIME_AWARE + delta).astimezone(tzinfo)

  def FromDatetime(self, dt):
    """Converts datetime to Timestamp.

    Args:
      dt: A datetime. If it's timezone-naive, it's assumed to be in UTC.
    """
    # Using this guide: http://wiki.python.org/moin/WorkingWithTime
    # And this conversion guide: http://docs.python.org/library/time.html

    # Turn the date parameter into a tuple (struct_time) that can then be
    # manipulated into a long value of seconds. During the conversion from
    # struct_time to long, the source date is in UTC, and so it follows that
    # the correct transformation is calendar.timegm()
    try:
      seconds = calendar.timegm(dt.utctimetuple())
      nanos = dt.microsecond * _NANOS_PER_MICROSECOND
    except AttributeError as e:
      raise AttributeError(
          'Failed to convert to Timestamp. Expected a datetime-like '
          'object, got {0}: {1}'.format(type(dt).__name__, e)
      ) from e
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def _internal_assign(self, dt):
    self.FromDatetime(dt)

  def __add__(self, value) -> datetime.datetime:
    if isinstance(value, Duration):
      return self.ToDatetime() + value.ToTimedelta()
    return self.ToDatetime() + value

  __radd__ = __add__

  def __sub__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
    if isinstance(value, Timestamp):
      return self.ToDatetime() - value.ToDatetime()
    elif isinstance(value, Duration):
      return self.ToDatetime() - value.ToTimedelta()
    return self.ToDatetime() - value

  def __rsub__(self, dt) -> datetime.timedelta:
    return dt - self.ToDatetime()


def _CheckTimestampValid(seconds, nanos):
  if seconds < _TIMESTAMP_SECONDS_MIN or seconds > _TIMESTAMP_SECONDS_MAX:
    raise ValueError(
        'Timestamp is not valid: Seconds {0} must be in range '
        '[-62135596800, 253402300799].'.format(seconds))
  if nanos < 0 or nanos >= _NANOS_PER_SECOND:
    raise ValueError(
        'Timestamp is not valid: Nanos {} must be in a range '
        '[0, 999999999].'.format(nanos)
    )

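The offset arithmetic in `FromJsonString` (subtract for `+HH:MM`, add for `-HH:MM`) can be traced with the stdlib alone. A sketch of that step, not the protobuf API itself:

```python
# Re-deriving FromJsonString's offset handling for '1972-01-01T10:00:20.021-05:00'.
import calendar
import datetime

value = '1972-01-01T10:00:20.021-05:00'
tz_pos = value.rfind('-')                       # last '-' starts the offset
dt = datetime.datetime.strptime(value[:tz_pos].split('.')[0],
                                '%Y-%m-%dT%H:%M:%S')
seconds = calendar.timegm(dt.utctimetuple())    # local wall-clock as epoch secs
hours, minutes = value[tz_pos + 1:].split(':')
seconds += (int(hours) * 60 + int(minutes)) * 60  # '-05:00' => add the offset
nanos = round(float('0.021') * 1e9)

assert seconds == 63126020                      # 1972-01-01T15:00:20 UTC
assert nanos == 21000000
```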
class Duration(object):
  """Class for Duration message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts Duration to string format.

    Returns:
      A string converted from self. The string will contain
      3, 6, or 9 fractional digits depending on the precision required to
      represent the exact Duration value. For example: "1s", "1.010s",
      "1.000000100s", "-3.100s"
    """
    _CheckDurationValid(self.seconds, self.nanos)
    if self.seconds < 0 or self.nanos < 0:
      result = '-'
      seconds = -self.seconds + int((0 - self.nanos) // 1e9)
      nanos = (0 - self.nanos) % 1e9
    else:
      result = ''
      seconds = self.seconds + int(self.nanos // 1e9)
      nanos = self.nanos % 1e9
    result += '%d' % seconds
    if (nanos % 1e9) == 0:
      # If there are 0 fractional digits, the fractional
      # point '.' should be omitted when serializing.
      return result + 's'
    if (nanos % 1e6) == 0:
      # Serialize 3 fractional digits.
      return result + '.%03ds' % (nanos / 1e6)
    if (nanos % 1e3) == 0:
      # Serialize 6 fractional digits.
      return result + '.%06ds' % (nanos / 1e3)
    # Serialize 9 fractional digits.
    return result + '.%09ds' % nanos

  def FromJsonString(self, value):
    """Converts a string to Duration.

    Args:
      value: A string to be converted. The string must end with 's'. Any
        fractional digits (or none) are accepted as long as they fit into
        precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s"

    Raises:
      ValueError: On parsing problems.
    """
    if not isinstance(value, str):
      raise ValueError('Duration JSON value not a string: {!r}'.format(value))
    if len(value) < 1 or value[-1] != 's':
      raise ValueError('Duration must end with letter "s": {0}.'.format(value))
    try:
      pos = value.find('.')
      if pos == -1:
        seconds = int(value[:-1])
        nanos = 0
      else:
        seconds = int(value[:pos])
        if value[0] == '-':
          nanos = int(round(float('-0{0}'.format(value[pos:-1])) * 1e9))
        else:
          nanos = int(round(float('0{0}'.format(value[pos:-1])) * 1e9))
      _CheckDurationValid(seconds, nanos)
      self.seconds = seconds
      self.nanos = nanos
    except ValueError as e:
      raise ValueError("Couldn't parse duration: {0} : {1}.".format(value, e))

  def ToNanoseconds(self):
    """Converts a Duration to nanoseconds."""
    return self.seconds * _NANOS_PER_SECOND + self.nanos

  def ToMicroseconds(self):
    """Converts a Duration to microseconds."""
    micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND)
    return self.seconds * _MICROS_PER_SECOND + micros

  def ToMilliseconds(self):
    """Converts a Duration to milliseconds."""
    millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND)
    return self.seconds * _MILLIS_PER_SECOND + millis

  def ToSeconds(self):
    """Converts a Duration to seconds."""
    return self.seconds

  def FromNanoseconds(self, nanos):
    """Converts nanoseconds to Duration."""
    self._NormalizeDuration(
        nanos // _NANOS_PER_SECOND, nanos % _NANOS_PER_SECOND
    )

  def FromMicroseconds(self, micros):
    """Converts microseconds to Duration."""
    self._NormalizeDuration(
        micros // _MICROS_PER_SECOND,
        (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND,
    )

  def FromMilliseconds(self, millis):
    """Converts milliseconds to Duration."""
    self._NormalizeDuration(
        millis // _MILLIS_PER_SECOND,
        (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND,
    )

  def FromSeconds(self, seconds):
    """Converts seconds to Duration."""
    self.seconds = seconds
    self.nanos = 0

  def ToTimedelta(self) -> datetime.timedelta:
    """Converts Duration to timedelta."""
    return datetime.timedelta(
        seconds=self.seconds,
        microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND),
    )

  def FromTimedelta(self, td):
    """Converts timedelta to Duration."""
    try:
      self._NormalizeDuration(
          td.seconds + td.days * _SECONDS_PER_DAY,
          td.microseconds * _NANOS_PER_MICROSECOND,
      )
    except AttributeError as e:
      raise AttributeError(
          'Failed to convert to Duration. Expected a timedelta-like '
          'object, got {0}: {1}'.format(type(td).__name__, e)
      ) from e

  def _internal_assign(self, td):
    self.FromTimedelta(td)

  def _NormalizeDuration(self, seconds, nanos):
    """Set Duration by seconds and nanos."""
    # Force nanos to be negative if the duration is negative.
    if seconds < 0 and nanos > 0:
      seconds += 1
      nanos -= _NANOS_PER_SECOND
    self.seconds = seconds
    self.nanos = nanos

  def __add__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
    if isinstance(value, Timestamp):
      return self.ToTimedelta() + value.ToDatetime()
    return self.ToTimedelta() + value

  __radd__ = __add__

  def __sub__(self, value) -> datetime.timedelta:
    return self.ToTimedelta() - value

  def __rsub__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
    return value - self.ToTimedelta()

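`_NormalizeDuration`'s sign handling is subtle because Python's floor division splits a negative duration into a more-negative seconds part and a positive nanos part. A self-contained sketch of that normalization, separate from the class above:

```python
# Stand-in for _NormalizeDuration: keep seconds and nanos with matching signs.
NANOS_PER_SECOND = 1000000000

def normalize(seconds, nanos):
    if seconds < 0 and nanos > 0:
        seconds += 1
        nanos -= NANOS_PER_SECOND
    return seconds, nanos

# -2.9s in nanos floor-divides into (-3 s, +0.1 s); normalization restores
# the expected (-2 s, -0.9 s) representation.
total = -2900000000
assert (total // NANOS_PER_SECOND, total % NANOS_PER_SECOND) == (-3, 100000000)
assert normalize(-3, 100000000) == (-2, -900000000)
assert normalize(5, 250) == (5, 250)   # non-negative values pass through
```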
def _CheckDurationValid(seconds, nanos):
  if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX:
    raise ValueError(
        'Duration is not valid: Seconds {0} must be in range '
        '[-315576000000, 315576000000].'.format(seconds)
    )
  if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND:
    raise ValueError(
        'Duration is not valid: Nanos {0} must be in range '
        '[-999999999, 999999999].'.format(nanos)
    )
  if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0):
    raise ValueError('Duration is not valid: Sign mismatch.')


def _RoundTowardZero(value, divider):
  """Truncates the remainder part after division."""
  # For some languages, the sign of the remainder is implementation
  # dependent if any of the operands is negative. Here we enforce
  # "rounded toward zero" semantics. For example, for (-5) / 2 an
  # implementation may give -3 as the result with the remainder being
  # 1. This function ensures we always return -2 (closer to zero).
  result = value // divider
  remainder = value % divider
  if result < 0 and remainder > 0:
    return result + 1
  else:
    return result

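The comment in `_RoundTowardZero` above is easy to verify: Python's `//` floors, so a negative quotient with a positive remainder must be nudged one step back toward zero. A quick restatement:

```python
# Truncating division built on Python's flooring // operator.
def round_toward_zero(value, divider):
    result, remainder = value // divider, value % divider
    if result < 0 and remainder > 0:
        return result + 1
    return result

assert -5 // 2 == -3                    # floor division rounds down
assert round_toward_zero(-5, 2) == -2   # truncation rounds toward zero
assert round_toward_zero(5, 2) == 2     # same as floor for positives
```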
def _SetStructValue(struct_value, value):
  if value is None:
    struct_value.null_value = 0
  elif isinstance(value, bool):
    # Note: this check must come before the number check because in Python
    # True and False are also considered numbers.
    struct_value.bool_value = value
  elif isinstance(value, str):
    struct_value.string_value = value
  elif isinstance(value, (int, float)):
    struct_value.number_value = value
  elif isinstance(value, (dict, Struct)):
    struct_value.struct_value.Clear()
    struct_value.struct_value.update(value)
  elif isinstance(value, (list, tuple, ListValue)):
    struct_value.list_value.Clear()
    struct_value.list_value.extend(value)
  else:
    raise ValueError('Unexpected type')


def _GetStructValue(struct_value):
  which = struct_value.WhichOneof('kind')
  if which == 'struct_value':
    return struct_value.struct_value
  elif which == 'null_value':
    return None
  elif which == 'number_value':
    return struct_value.number_value
  elif which == 'string_value':
    return struct_value.string_value
  elif which == 'bool_value':
    return struct_value.bool_value
  elif which == 'list_value':
    return struct_value.list_value
  elif which is None:
    raise ValueError('Value not set')


class Struct(object):
  """Class for Struct message type."""

  __slots__ = ()

  def __getitem__(self, key):
    return _GetStructValue(self.fields[key])

  def __setitem__(self, key, value):
    _SetStructValue(self.fields[key], value)

  def __delitem__(self, key):
    del self.fields[key]

  def __len__(self):
    return len(self.fields)

  def __iter__(self):
    return iter(self.fields)

  def _internal_assign(self, dictionary):
    self.Clear()
    self.update(dictionary)

  def _internal_compare(self, other):
    size = len(self)
    if size != len(other):
      return False
    for key, value in self.items():
      if key not in other:
        return False
      if isinstance(other[key], (dict, list)):
        if not value._internal_compare(other[key]):
          return False
      elif value != other[key]:
        return False
    return True

  def keys(self):  # pylint: disable=invalid-name
    return self.fields.keys()

  def values(self):  # pylint: disable=invalid-name
    return [self[key] for key in self]

  def items(self):  # pylint: disable=invalid-name
    return [(key, self[key]) for key in self]

  def get_or_create_list(self, key):
    """Returns a list for this key, creating if it didn't exist already."""
    if not self.fields[key].HasField('list_value'):
      # Clear will mark list_value modified which will indeed create a list.
      self.fields[key].list_value.Clear()
    return self.fields[key].list_value

  def get_or_create_struct(self, key):
    """Returns a struct for this key, creating if it didn't exist already."""
    if not self.fields[key].HasField('struct_value'):
      # Clear will mark struct_value modified which will indeed create a struct.
      self.fields[key].struct_value.Clear()
    return self.fields[key].struct_value

  def update(self, dictionary):  # pylint: disable=invalid-name
    for key, value in dictionary.items():
      _SetStructValue(self.fields[key], value)


collections.abc.MutableMapping.register(Struct)


class ListValue(object):
  """Class for ListValue message type."""

  __slots__ = ()

  def __len__(self):
    return len(self.values)

  def append(self, value):
    _SetStructValue(self.values.add(), value)

  def extend(self, elem_seq):
    for value in elem_seq:
      self.append(value)

  def __getitem__(self, index):
    """Retrieves item by the specified index."""
    return _GetStructValue(self.values.__getitem__(index))

  def __setitem__(self, index, value):
    _SetStructValue(self.values.__getitem__(index), value)

  def __delitem__(self, key):
    del self.values[key]

  def _internal_assign(self, elem_seq):
    self.Clear()
    self.extend(elem_seq)

  def _internal_compare(self, other):
    size = len(self)
    if size != len(other):
      return False
    for i in range(size):
      if isinstance(other[i], (dict, list)):
        if not self[i]._internal_compare(other[i]):
          return False
      elif self[i] != other[i]:
        return False
    return True

  def items(self):
    for i in range(len(self)):
      yield self[i]

  def add_struct(self):
    """Appends and returns a struct value as the next value in the list."""
    struct_value = self.values.add().struct_value
    # Clear will mark struct_value modified which will indeed create a struct.
    struct_value.Clear()
    return struct_value

  def add_list(self):
    """Appends and returns a list value as the next value in the list."""
    list_value = self.values.add().list_value
    # Clear will mark list_value modified which will indeed create a list.
    list_value.Clear()
    return list_value


collections.abc.MutableSequence.register(ListValue)

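The comment in `_SetStructValue` about checking `bool` before the number types reflects a Python quirk worth demonstrating in isolation (the dispatcher sketch below is illustrative, not the protobuf code):

```python
# bool is a subclass of int in Python, so an (int, float) check placed first
# would misfile True/False as numbers; _SetStructValue orders its branches
# to avoid that.
assert isinstance(True, int)

def kind_of(value):
    if isinstance(value, bool):
        return 'bool_value'
    if isinstance(value, (int, float)):
        return 'number_value'
    return 'other'

assert kind_of(True) == 'bool_value'    # not captured by the number branch
assert kind_of(1) == 'number_value'
```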
# LINT.IfChange(wktbases)
WKTBASES = {
    'google.protobuf.Any': Any,
    'google.protobuf.Duration': Duration,
    'google.protobuf.FieldMask': FieldMask,
    'google.protobuf.ListValue': ListValue,
    'google.protobuf.Struct': Struct,
    'google.protobuf.Timestamp': Timestamp,
}
# LINT.ThenChange(//depot/google.protobuf/compiler/python/pyi_generator.cc:wktbases)
@@ -0,0 +1,245 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Constants and static functions to support protocol buffer wire format."""

__author__ = 'robinson@google.com (Will Robinson)'

import struct
from google.protobuf import descriptor
from google.protobuf import message


TAG_TYPE_BITS = 3  # Number of bits used to hold type info in a proto tag.
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1  # 0x7

# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in //google/protobuf/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5


# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1

INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1

# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'


# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
  raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
  raise AssertionError('Format "Q" is not a 64-bit number.')


def PackTag(field_number, wire_type):
  """Returns an unsigned 32-bit integer that encodes the field number and
  wire type information in standard protocol message wire format.

  Args:
    field_number: Expected to be an integer in the range [1, 1 << 29)
    wire_type: One of the WIRETYPE_* constants.
  """
  if not 0 <= wire_type <= _WIRETYPE_MAX:
    raise message.EncodeError('Unknown wire type: %d' % wire_type)
  return (field_number << TAG_TYPE_BITS) | wire_type


def UnpackTag(tag):
  """The inverse of PackTag(). Given an unsigned 32-bit number,
  returns a (field_number, wire_type) tuple.
  """
  return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)

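`PackTag`/`UnpackTag` above are pure bit arithmetic and can be exercised standalone (the lowercase helper is a stand-in, not the module's API):

```python
# Tag packing: field number in the high bits, wire type in the low 3 bits.
TAG_TYPE_BITS = 3
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1

def pack_tag(field_number, wire_type):
    return (field_number << TAG_TYPE_BITS) | wire_type

tag = pack_tag(1, 2)            # field 1, WIRETYPE_LENGTH_DELIMITED
assert tag == 0x0A              # a familiar first byte of many messages
# UnpackTag's inverse: shift off the wire type, mask it back out.
assert (tag >> TAG_TYPE_BITS, tag & TAG_TYPE_MASK) == (1, 2)
```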
def ZigZagEncode(value):
  """ZigZag Transform: Encodes signed integers so that they can be
  effectively used with varint encoding. See wire_format.h for
  more details.
  """
  if value >= 0:
    return value << 1
  return (value << 1) ^ (~0)


def ZigZagDecode(value):
  """Inverse of ZigZagEncode()."""
  if not value & 0x1:
    return value >> 1
  return (value >> 1) ^ (~0)


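ZigZag maps small-magnitude signed values to small unsigned values (0, -1, 1, -2, 2 → 0, 1, 2, 3, 4), which is what makes it pair well with varints. The two functions above, restated and round-tripped:

```python
# ZigZag transform: interleave negatives and positives so small magnitudes
# stay small when varint-encoded.
def zigzag_encode(value):
    return value << 1 if value >= 0 else (value << 1) ^ (~0)

def zigzag_decode(value):
    return value >> 1 if not value & 0x1 else (value >> 1) ^ (~0)

assert [zigzag_encode(v) for v in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
assert all(zigzag_decode(zigzag_encode(v)) == v for v in range(-5, 6))
```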
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.


def Int32ByteSize(field_number, int32):
  return Int64ByteSize(field_number, int32)


def Int32ByteSizeNoTag(int32):
  return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)


def Int64ByteSize(field_number, int64):
  # Have to convert to uint before calling UInt64ByteSize().
  return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)


def UInt32ByteSize(field_number, uint32):
  return UInt64ByteSize(field_number, uint32)


def UInt64ByteSize(field_number, uint64):
  return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)


def SInt32ByteSize(field_number, int32):
  return UInt32ByteSize(field_number, ZigZagEncode(int32))


def SInt64ByteSize(field_number, int64):
  return UInt64ByteSize(field_number, ZigZagEncode(int64))


def Fixed32ByteSize(field_number, fixed32):
  return TagByteSize(field_number) + 4


def Fixed64ByteSize(field_number, fixed64):
  return TagByteSize(field_number) + 8


def SFixed32ByteSize(field_number, sfixed32):
  return TagByteSize(field_number) + 4


def SFixed64ByteSize(field_number, sfixed64):
  return TagByteSize(field_number) + 8


def FloatByteSize(field_number, flt):
  return TagByteSize(field_number) + 4


def DoubleByteSize(field_number, double):
  return TagByteSize(field_number) + 8


def BoolByteSize(field_number, b):
  return TagByteSize(field_number) + 1


def EnumByteSize(field_number, enum):
  return UInt32ByteSize(field_number, enum)


def StringByteSize(field_number, string):
  return BytesByteSize(field_number, string.encode('utf-8'))


def BytesByteSize(field_number, b):
  return (TagByteSize(field_number)
          + _VarUInt64ByteSizeNoTag(len(b))
          + len(b))


def GroupByteSize(field_number, message):
  return (2 * TagByteSize(field_number)  # START and END group.
          + message.ByteSize())


def MessageByteSize(field_number, message):
  return (TagByteSize(field_number)
          + _VarUInt64ByteSizeNoTag(message.ByteSize())
          + message.ByteSize())


def MessageSetItemByteSize(field_number, msg):
  # First compute the sizes of the tags.
  # There are 2 tags for the beginning and ending of the repeated group, that
  # is field number 1, one with field number 2 (type_id) and one with field
  # number 3 (message).
  total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))

  # Add the number of bytes for type_id.
  total_size += _VarUInt64ByteSizeNoTag(field_number)

  message_size = msg.ByteSize()

  # The number of bytes for encoding the length of the message.
  total_size += _VarUInt64ByteSizeNoTag(message_size)

  # The size of the message.
  total_size += message_size
  return total_size


def TagByteSize(field_number):
  """Returns the bytes required to serialize a tag with this field number."""
  # Just pass in type 0, since the type won't affect the tag+type size.
  return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))


# Private helper function for the *ByteSize() functions above.

def _VarUInt64ByteSizeNoTag(uint64):
  """Returns the number of bytes required to serialize a single varint
  using boundary value comparisons. (unrolled loop optimization -WPierce)
  uint64 must be unsigned.
  """
  if uint64 <= 0x7f: return 1
  if uint64 <= 0x3fff: return 2
  if uint64 <= 0x1fffff: return 3
  if uint64 <= 0xfffffff: return 4
  if uint64 <= 0x7ffffffff: return 5
  if uint64 <= 0x3ffffffffff: return 6
  if uint64 <= 0x1ffffffffffff: return 7
  if uint64 <= 0xffffffffffffff: return 8
  if uint64 <= 0x7fffffffffffffff: return 9
  if uint64 > UINT64_MAX:
    raise message.EncodeError('Value out of range: %d' % uint64)
  return 10

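The unrolled boundaries in `_VarUInt64ByteSizeNoTag` follow from varints carrying 7 payload bits per byte; a loop-based equivalent reproduces the same widths:

```python
# Loop form of the varint width calculation: one byte per 7 payload bits.
def varint_size(u):
    n = 1
    while u > 0x7f:
        u >>= 7
        n += 1
    return n

assert varint_size(0) == 1
assert varint_size(0x7f) == 1 and varint_size(0x80) == 2
assert varint_size(0x3fff) == 2 and varint_size(0x4000) == 3
assert varint_size((1 << 63) - 1) == 9    # matches the 0x7fffffffffffffff bound
assert varint_size((1 << 64) - 1) == 10   # UINT64_MAX takes the full 10 bytes
```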
NON_PACKABLE_TYPES = (
    descriptor.FieldDescriptor.TYPE_STRING,
    descriptor.FieldDescriptor.TYPE_GROUP,
    descriptor.FieldDescriptor.TYPE_MESSAGE,
    descriptor.FieldDescriptor.TYPE_BYTES
)


def IsTypePackable(field_type):
  """Return true iff packable = true is valid for fields of this type.

  Args:
    field_type: a FieldDescriptor::Type value.

  Returns:
    True iff fields of this type are packable.
  """
  return field_type not in NON_PACKABLE_TYPES
File diff suppressed because it is too large
@@ -0,0 +1,448 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# TODO: We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.


"""Contains an abstract base class for protocol messages."""

__author__ = 'robinson@google.com (Will Robinson)'

_INCONSISTENT_MESSAGE_ATTRIBUTES = ('Extensions',)


class Error(Exception):
  """Base error type for this module."""
  pass


class DecodeError(Error):
  """Exception raised when deserializing messages."""
  pass


class EncodeError(Error):
  """Exception raised when serializing messages."""
  pass


class Message(object):

  """Abstract base class for protocol messages.

  Protocol message classes are almost always generated by the protocol
  compiler. These generated types subclass Message and implement the methods
  shown below.
  """

  # TODO: Link to an HTML document here.

  # TODO: Document that instances of this class will also
  # have an Extensions attribute with __getitem__ and __setitem__.
  # Again, not sure how to best convey this.

  # TODO: Document these fields and methods.

  __slots__ = []

  #: The :class:`google.protobuf.Descriptor`
  # for this message type.
  DESCRIPTOR = None

  def __deepcopy__(self, memo=None):
    clone = type(self)()
    clone.MergeFrom(self)
    return clone

  def __dir__(self):
    """Provides the list of all accessible Message attributes."""
    message_attributes = set(super().__dir__())

    # TODO: Remove this once the UPB implementation is improved.
    # The UPB proto implementation currently doesn't provide proto fields as
    # attributes and they have to be added.
    if self.DESCRIPTOR is not None:
      for field in self.DESCRIPTOR.fields:
        message_attributes.add(field.name)

    # The Fast C++ proto implementation provides inaccessible attributes that
    # have to be removed.
    for attribute in _INCONSISTENT_MESSAGE_ATTRIBUTES:
      if attribute not in message_attributes:
        continue
      try:
        getattr(self, attribute)
      except AttributeError:
        message_attributes.remove(attribute)

    return sorted(message_attributes)

  def __eq__(self, other_msg):
    """Recursively compares two messages by value and structure."""
    raise NotImplementedError

  def __ne__(self, other_msg):
    # Can't just say self != other_msg, since that would infinitely recurse. :)
    return not self == other_msg

  def __hash__(self):
    raise TypeError('unhashable object')

  def __str__(self):
    """Outputs a human-readable representation of the message."""
    raise NotImplementedError

  def __unicode__(self):
    """Outputs a human-readable representation of the message."""
    raise NotImplementedError

  def __contains__(self, field_name_or_key):
    """Checks if a certain field is set for the message.

    Has presence fields return true if the field is set, false if the field is
    not set. Fields without presence raise `ValueError` (this includes
    repeated fields, map fields, and implicit presence fields).

    If field_name is not defined in the message descriptor, `ValueError` will
    be raised.
    Note: WKT Struct checks if the key is contained in fields. ListValue checks
    if the item is contained in the list.

    Args:
      field_name_or_key: For Struct, the key (str) of the fields map. For
        ListValue, any type that may be contained in the list. For other
        messages, name of the field (str) to check for presence.

    Returns:
      bool: For Struct, whether the item is contained in fields. For ListValue,
        whether the item is contained in the list. For other messages,
        whether a value has been set for the named field.

    Raises:
      ValueError: For normal messages, if the `field_name_or_key` is not a
        member of this message or `field_name_or_key` is not a string.
    """
    raise NotImplementedError

  def MergeFrom(self, other_msg):
    """Merges the contents of the specified message into current message.

    This method merges the contents of the specified message into the current
    message. Singular fields that are set in the specified message overwrite
    the corresponding fields in the current message. Repeated fields are
    appended. Singular sub-messages and groups are recursively merged.

    Args:
      other_msg (Message): A message to merge into the current message.
    """
    raise NotImplementedError

  def CopyFrom(self, other_msg):
    """Copies the content of the specified message into the current message.

    The method clears the current message and then merges the specified
    message using MergeFrom.

    Args:
      other_msg (Message): A message to copy into the current one.
    """
    if self is other_msg:
      return
    self.Clear()
    self.MergeFrom(other_msg)

  def Clear(self):
    """Clears all data that was set in the message."""
    raise NotImplementedError

  def SetInParent(self):
    """Mark this as present in the parent.

    This normally happens automatically when you assign a field of a
    sub-message, but sometimes you want to make the sub-message
    present while keeping it empty. If you find yourself using this,
    you may want to reconsider your design.
    """
    raise NotImplementedError

  def IsInitialized(self):
    """Checks if the message is initialized.

    Returns:
      bool: The method returns True if the message is initialized (i.e. all of
        its required fields are set).
    """
    raise NotImplementedError

  # TODO: MergeFromString() should probably return None and be
  # implemented in terms of a helper that returns the # of bytes read. Our
  # deserialization routines would use the helper when recursively
  # deserializing, but the end user would almost always just want the no-return
  # MergeFromString().

  def MergeFromString(self, serialized):
    """Merges serialized protocol buffer data into this message.

    When we find a field in `serialized` that is already present
    in this message:

    - If it's a "repeated" field, we append to the end of our list.
    - Else, if it's a scalar, we overwrite our field.
    - Else, (it's a nonrepeated composite), we recursively merge
      into the existing composite.

    Args:
      serialized (bytes): Any object that allows us to call
        ``memoryview(serialized)`` to access a string of bytes using the
        buffer interface.

    Returns:
      int: The number of bytes read from `serialized`.
      For non-group messages, this will always be `len(serialized)`,
      but for messages which are actually groups, this will
      generally be less than `len(serialized)`, since we must
      stop when we reach an ``END_GROUP`` tag. Note that if
      we *do* stop because of an ``END_GROUP`` tag, the number
      of bytes returned does not include the bytes
      for the ``END_GROUP`` tag information.

    Raises:
      DecodeError: if the input cannot be parsed.
    """
    # TODO: Document handling of unknown fields.
    # TODO: When we switch to a helper, this will return None.
    raise NotImplementedError

  def ParseFromString(self, serialized):
    """Parse serialized protocol buffer data in binary form into this message.

    Like :func:`MergeFromString()`, except we clear the object first.

    Raises:
      message.DecodeError if the input cannot be parsed.
    """
    self.Clear()
    return self.MergeFromString(serialized)

  def SerializeToString(self, **kwargs):
    """Serializes the protocol message to a binary string.

    Keyword Args:
      deterministic (bool): If true, requests deterministic serialization
        of the protobuf, with predictable ordering of map keys.

    Returns:
      A binary string representation of the message if all of the required
      fields in the message are set (i.e. the message is initialized).

    Raises:
      EncodeError: if the message isn't initialized (see :func:`IsInitialized`).
    """
    raise NotImplementedError

  def SerializePartialToString(self, **kwargs):
    """Serializes the protocol message to a binary string.

    This method is similar to SerializeToString but doesn't check if the
    message is initialized.

    Keyword Args:
      deterministic (bool): If true, requests deterministic serialization
        of the protobuf, with predictable ordering of map keys.

    Returns:
      bytes: A serialized representation of the partial message.
    """
    raise NotImplementedError

  # TODO: Decide whether we like these better
  # than auto-generated has_foo() and clear_foo() methods
  # on the instances themselves. This way is less consistent
  # with C++, but it makes reflection-type access easier and
  # reduces the number of magically autogenerated things.
  #
  # TODO: Be sure to document (and test) exactly
  # which field names are accepted here. Are we case-sensitive?
  # What do we do with fields that share names with Python keywords
  # like 'lambda' and 'yield'?
  #
  # nnorwitz says:
  # """
  # Typically (in python), an underscore is appended to names that are
  # keywords. So they would become lambda_ or yield_.
  # """
  def ListFields(self):
    """Returns a list of (FieldDescriptor, value) tuples for present fields.

    A message field is non-empty if HasField() would return true. A singular
    primitive field is non-empty if HasField() would return true in proto2 or it
    is non zero in proto3. A repeated field is non-empty if it contains at least
    one element. The fields are ordered by field number.

    Returns:
      list[tuple(FieldDescriptor, value)]: field descriptors and values
        for all fields in the message which are not empty. The values vary by
        field type.
    """
    raise NotImplementedError

  def HasField(self, field_name):
    """Checks if a certain field is set for the message.

    For a oneof group, checks if any field inside is set. Note that if the
    field_name is not defined in the message descriptor, :exc:`ValueError` will
    be raised.

    Args:
      field_name (str): The name of the field to check for presence.

    Returns:
      bool: Whether a value has been set for the named field.

    Raises:
      ValueError: if the `field_name` is not a member of this message.
    """
    raise NotImplementedError

  def ClearField(self, field_name):
    """Clears the contents of a given field.

    Inside a oneof group, clears the field set. If the name neither refers to a
    defined field nor a oneof group, :exc:`ValueError` is raised.

    Args:
      field_name (str): The name of the field to clear.

    Raises:
      ValueError: if the `field_name` is not a member of this message.
    """
    raise NotImplementedError

  def WhichOneof(self, oneof_group):
    """Returns the name of the field that is set inside a oneof group.

    If no field is set, returns None.

    Args:
      oneof_group (str): the name of the oneof group to check.

    Returns:
      str or None: The name of the group that is set, or None.

    Raises:
      ValueError: no group with the given name exists
    """
    raise NotImplementedError

  def HasExtension(self, field_descriptor):
    """Checks if a certain extension is present for this message.

    Extensions are retrieved using the :attr:`Extensions` mapping (if present).

    Args:
      field_descriptor: The field descriptor for the extension to check.

    Returns:
      bool: Whether the extension is present for this message.

    Raises:
      KeyError: if the extension is repeated. Similar to repeated fields,
        there is no separate notion of presence: a "not present" repeated
        extension is an empty list.
    """
    raise NotImplementedError

  def ClearExtension(self, field_descriptor):
    """Clears the contents of a given extension.

    Args:
      field_descriptor: The field descriptor for the extension to clear.
    """
    raise NotImplementedError

  def UnknownFields(self):
    """Returns the UnknownFieldSet.

    Returns:
      UnknownFieldSet: The unknown fields stored in this message.
    """
    raise NotImplementedError

  def DiscardUnknownFields(self):
    """Clears all fields in the :class:`UnknownFieldSet`.

    This operation is recursive for nested messages.
    """
    raise NotImplementedError

  def ByteSize(self):
    """Returns the serialized size of this message.

    Recursively calls ByteSize() on all contained messages.

    Returns:
      int: The number of bytes required to serialize this message.
    """
    raise NotImplementedError

  @classmethod
  def FromString(cls, s):
    raise NotImplementedError

  def _SetListener(self, message_listener):
    """Internal method used by the protocol message implementation.
    Clients should not call this directly.

    Sets a listener that this message will call on certain state transitions.

    The purpose of this method is to register back-edges from children to
    parents at runtime, for the purpose of setting "has" bits and
    byte-size-dirty bits in the parent and ancestor objects whenever a child or
    descendant object is modified.

    If the client wants to disconnect this Message from the object tree, she
    explicitly sets callback to None.

    If message_listener is None, unregisters any existing listener. Otherwise,
    message_listener must implement the MessageListener interface in
    internal/message_listener.py, and we discard any listener registered
    via a previous _SetListener() call.
    """
    raise NotImplementedError

  def __getstate__(self):
    """Support the pickle protocol."""
    return dict(serialized=self.SerializePartialToString())

  def __setstate__(self, state):
    """Support the pickle protocol."""
    self.__init__()
    serialized = state['serialized']
    # On Python 3, using encoding='latin1' is required for unpickling
    # protos pickled by Python 2.
    if not isinstance(serialized, bytes):
      serialized = serialized.encode('latin1')
    self.ParseFromString(serialized)

  def __reduce__(self):
    message_descriptor = self.DESCRIPTOR
    if message_descriptor.containing_type is None:
      return type(self), (), self.__getstate__()
    # the message type must be nested.
    # Python does not pickle nested classes; use the symbol_database on the
    # receiving end.
    container = message_descriptor
    return (_InternalConstructMessage, (container.full_name,),
            self.__getstate__())


def _InternalConstructMessage(full_name):
  """Constructs a nested message."""
  from google.protobuf import symbol_database  # pylint:disable=g-import-not-at-top

  return symbol_database.Default().GetSymbol(full_name)()
@@ -0,0 +1,190 @@

# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Provides a factory class for generating dynamic messages.

The easiest way to use this class is if you have access to the FileDescriptor
protos containing the messages you want to create; then you can just do the
following:

message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
my_proto_instance = message_classes['some.proto.package.MessageName']()
"""

__author__ = 'matthewtoia@google.com (Matt Toia)'

import warnings

from google.protobuf import descriptor_pool
from google.protobuf import message
from google.protobuf.internal import api_implementation

if api_implementation.Type() == 'python':
  from google.protobuf.internal import python_message as message_impl
else:
  from google.protobuf.pyext import cpp_message as message_impl  # pylint: disable=g-import-not-at-top


# The type of all Message classes.
_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType


def GetMessageClass(descriptor):
  """Obtains a proto2 message class based on the passed in descriptor.

  Passing a descriptor with a fully qualified name matching a previous
  invocation will cause the same class to be returned.

  Args:
    descriptor: The descriptor to build from.

  Returns:
    A class describing the passed in descriptor.
  """
  concrete_class = getattr(descriptor, '_concrete_class', None)
  if concrete_class:
    return concrete_class
  return _InternalCreateMessageClass(descriptor)


def GetMessageClassesForFiles(files, pool):
  """Gets all the messages from specified files.

  This will find and resolve dependencies, failing if the descriptor
  pool cannot satisfy them.

  This will not return the classes for nested types within those classes; for
  those, use GetMessageClass() on the nested types within their containing
  messages.

  For example, for the message:

  message NestedTypeMessage {
    message NestedType {
      string data = 1;
    }
    NestedType nested = 1;
  }

  NestedTypeMessage will be in the result, but not
  NestedTypeMessage.NestedType.

  Args:
    files: The file names to extract messages from.
    pool: The descriptor pool to find the files including the dependent files.

  Returns:
    A dictionary mapping proto names to the message classes.
  """
  result = {}
  for file_name in files:
    file_desc = pool.FindFileByName(file_name)
    for desc in file_desc.message_types_by_name.values():
      result[desc.full_name] = GetMessageClass(desc)

    # While the extension FieldDescriptors are created by the descriptor pool,
    # the python classes created in the factory need them to be registered
    # explicitly, which is done below.
    #
    # The call to RegisterExtension will specifically check if the
    # extension was already registered on the object and either
    # ignore the registration if the original was the same, or raise
    # an error if they were different.

    for extension in file_desc.extensions_by_name.values():
      _ = GetMessageClass(extension.containing_type)
      if api_implementation.Type() != 'python':
        # TODO: Remove this check here. Duplicate extension
        # register check should be in descriptor_pool.
        if extension is not pool.FindExtensionByNumber(
            extension.containing_type, extension.number
        ):
          raise ValueError('Double registration of Extensions')
      # Recursively load protos for extension field, in order to be able to
      # fully represent the extension. This matches the behavior for regular
      # fields too.
      if extension.message_type:
        GetMessageClass(extension.message_type)
  return result


def _InternalCreateMessageClass(descriptor):
  """Builds a proto2 message class based on the passed in descriptor.

  Args:
    descriptor: The descriptor to build from.

  Returns:
    A class describing the passed in descriptor.
  """
  descriptor_name = descriptor.name
  result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
      descriptor_name,
      (message.Message,),
      {
          'DESCRIPTOR': descriptor,
          # If module not set, it wrongly points to message_factory module.
          '__module__': None,
      },
  )
  for field in descriptor.fields:
    if field.message_type:
      GetMessageClass(field.message_type)

  for extension in result_class.DESCRIPTOR.extensions:
    extended_class = GetMessageClass(extension.containing_type)
    if api_implementation.Type() != 'python':
      # TODO: Remove this check here. Duplicate extension
      # register check should be in descriptor_pool.
      pool = extension.containing_type.file.pool
      if extension is not pool.FindExtensionByNumber(
          extension.containing_type, extension.number
      ):
        raise ValueError('Double registration of Extensions')
    if extension.message_type:
      GetMessageClass(extension.message_type)
  return result_class


# Deprecated. Please use GetMessageClass() or GetMessageClassesForFiles()
# method above instead.
class MessageFactory(object):
  """Factory for creating Proto2 messages from descriptors in a pool."""

  def __init__(self, pool=None):
    """Initializes a new factory."""
    self.pool = pool or descriptor_pool.DescriptorPool()


def GetMessages(file_protos, pool=None):
  """Builds a dictionary of all the messages available in a set of files.

  Args:
    file_protos: Iterable of FileDescriptorProto to build messages out of.
    pool: The descriptor pool to add the file protos.

  Returns:
    A dictionary mapping proto names to the message classes. This will include
    any dependent messages as well as any messages defined in the same file as
    a specified message.
  """
  # The cpp implementation of the protocol buffer library requires adding the
  # messages in topological order of the dependency graph.
  des_pool = pool or descriptor_pool.DescriptorPool()
  file_by_name = {file_proto.name: file_proto for file_proto in file_protos}

  def _AddFile(file_proto):
    for dependency in file_proto.dependency:
      if dependency in file_by_name:
        # Remove from elements to be visited, in order to cut cycles.
        _AddFile(file_by_name.pop(dependency))
    des_pool.Add(file_proto)

  while file_by_name:
    _AddFile(file_by_name.popitem()[1])
  return GetMessageClassesForFiles(
      [file_proto.name for file_proto in file_protos], des_pool
  )
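GetMessages must feed files to the descriptor pool in dependency order, which the recursive `_AddFile` walk above guarantees while popping visited entries to cut cycles. The same traversal can be sketched in isolation; the names `add_in_dependency_order`, `files`, and `deps` here are illustrative, not library APIs:

```python
def add_in_dependency_order(files, deps, add):
    # files: {name: payload}; deps: {name: [dependency names]}.
    # Invokes add(payload) so every entry is added after its dependencies,
    # mirroring the recursive _AddFile walk in GetMessages above.
    pending = dict(files)

    def _visit(name, payload):
        for dep in deps.get(name, ()):
            if dep in pending:
                # Pop before recursing, which also cuts dependency cycles.
                _visit(dep, pending.pop(dep))
        add(payload)

    while pending:
        name, payload = pending.popitem()
        _visit(name, payload)
```

Popping each file before recursing into it is what makes a cyclic dependency terminate: the second visit finds the file already gone from `pending` and skips it.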
@@ -0,0 +1,153 @@

# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Nextgen Pythonic protobuf APIs."""

import io
from typing import Text, Type, TypeVar

from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.message import Message

_MESSAGE = TypeVar('_MESSAGE', bound='Message')


def serialize(message: _MESSAGE, deterministic: bool = None) -> bytes:
  """Return the serialized proto.

  Args:
    message: The proto message to be serialized.
    deterministic: If true, requests deterministic serialization
      of the protobuf, with predictable ordering of map keys.

  Returns:
    A binary bytes representation of the message.
  """
  return message.SerializeToString(deterministic=deterministic)


def parse(message_class: Type[_MESSAGE], payload: bytes) -> _MESSAGE:
  """Given serialized data in binary form, deserialize it into a Message.

  Args:
    message_class: The message meta class.
    payload: Serialized bytes in binary form.

  Returns:
    A new message deserialized from payload.
  """
  new_message = message_class()
  new_message.ParseFromString(payload)
  return new_message


def serialize_length_prefixed(message: _MESSAGE, output: io.BytesIO) -> None:
  """Writes the size of the message as a varint and the serialized message.

  Writes the size of the message as a varint and then the serialized message.
  This allows more data to be written to the output after the message. Use
  parse_length_prefixed to parse messages written by this method.

  The output stream must be buffered, e.g. using
  https://docs.python.org/3/library/io.html#buffered-streams.

  Example usage:
    out = io.BytesIO()
    for msg in message_list:
      proto.serialize_length_prefixed(msg, out)

  Args:
    message: The protocol buffer message that should be serialized.
    output: BytesIO or custom buffered IO that data should be written to.
  """
  size = message.ByteSize()
  encoder._VarintEncoder()(output.write, size)
  out_size = output.write(serialize(message))

  if out_size != size:
    raise TypeError(
        'Failed to write complete message (wrote: %d, expected: %d)'
        '. Ensure output is using buffered IO.' % (out_size, size)
    )


def parse_length_prefixed(
    message_class: Type[_MESSAGE], input_bytes: io.BytesIO
) -> _MESSAGE:
  """Parse a message from input_bytes.

  Args:
    message_class: The protocol buffer message class that parser should parse.
    input_bytes: A buffered input.

  Example usage:
    while True:
      msg = proto.parse_length_prefixed(message_class, input_bytes)
      if msg is None:
        break
      ...

  Returns:
    A parsed message if successful. None if input_bytes is at EOF.
  """
  size = decoder._DecodeVarint(input_bytes)
  if size is None:
    # It is the end of buffered input. See example usage in the
    # API description.
    return None

  message = message_class()

  if size == 0:
    return message

  parsed_size = message.ParseFromString(input_bytes.read(size))
  if parsed_size != size:
    raise ValueError(
        'Truncated message or non-buffered input_bytes: '
        'Expected {0} bytes but only {1} bytes parsed for '
        '{2}.'.format(size, parsed_size, message.DESCRIPTOR.name)
    )
  return message
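The pair of functions above implements varint length-prefix framing: a varint size, then the payload, repeated. The same framing can be sketched for raw byte payloads without protobuf itself; `write_frame` and `read_frame` below are illustrative stand-ins for serialize_length_prefixed and parse_length_prefixed, not library APIs:

```python
import io


def write_frame(out: io.BytesIO, payload: bytes) -> None:
    # Varint-encode the payload length (7 bits per byte, high bit set on
    # continuation bytes), then write the payload.
    size = len(payload)
    while True:
        bits = size & 0x7f
        size >>= 7
        out.write(bytes([bits | 0x80]) if size else bytes([bits]))
        if not size:
            break
    out.write(payload)


def read_frame(inp: io.BytesIO):
    # Returns the next payload, or None at EOF (mirroring the None return
    # of parse_length_prefixed).
    shift = 0
    size = 0
    while True:
        b = inp.read(1)
        if not b:
            return None
        size |= (b[0] & 0x7f) << shift
        if not b[0] & 0x80:
            break
        shift += 7
    return inp.read(size)
```

Because each frame carries its own length, any number of messages can share one stream and be read back one at a time.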


def byte_size(message: Message) -> int:
  """Returns the serialized size of this message.

  Args:
    message: A proto message.

  Returns:
    int: The number of bytes required to serialize this message.
  """
  return message.ByteSize()


def clear_message(message: Message) -> None:
  """Clears all data that was set in the message.

  Args:
    message: The proto message to be cleared.
  """
  message.Clear()


def clear_field(message: Message, field_name: Text) -> None:
  """Clears the contents of a given field.

  Inside a oneof group, clears the field set. If the name neither refers to a
  defined field nor a oneof group, :exc:`ValueError` is raised.

  Args:
    message: The proto message.
    field_name (str): The name of the field to be cleared.

  Raises:
    ValueError: if the `field_name` is not a member of this message.
  """
  message.ClearField(field_name)
@@ -0,0 +1,111 @@

# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Dynamic Protobuf class creator."""

from collections import OrderedDict
import hashlib
import os

from google.protobuf import descriptor_pb2
from google.protobuf import descriptor
from google.protobuf import descriptor_pool
from google.protobuf import message_factory


def _GetMessageFromFactory(pool, full_name):
  """Get a proto class from the MessageFactory by name.

  Args:
    pool: a descriptor pool.
    full_name: str, the fully qualified name of the proto type.
  Returns:
    A class, for the type identified by full_name.
  Raises:
    KeyError, if the proto is not found in the factory's descriptor pool.
  """
  proto_descriptor = pool.FindMessageTypeByName(full_name)
  proto_cls = message_factory.GetMessageClass(proto_descriptor)
  return proto_cls


def MakeSimpleProtoClass(fields, full_name=None, pool=None):
  """Create a Protobuf class whose fields are basic types.

  Note: this doesn't validate field names!

  Args:
    fields: dict of {name: field_type} mappings for each field in the proto. If
      this is an OrderedDict the order will be maintained, otherwise the
      fields will be sorted by name.
    full_name: optional str, the fully-qualified name of the proto type.
    pool: optional DescriptorPool instance.
  Returns:
    a class, the new protobuf class with a FileDescriptor.
  """
  pool_instance = pool or descriptor_pool.DescriptorPool()
  if full_name is not None:
    try:
      proto_cls = _GetMessageFromFactory(pool_instance, full_name)
      return proto_cls
    except KeyError:
      # The factory's DescriptorPool doesn't know about this class yet.
      pass

  # Get a list of (name, field_type) tuples from the fields dict. If fields was
  # an OrderedDict we keep the order, but otherwise we sort the fields to
  # ensure consistent ordering.
  field_items = fields.items()
  if not isinstance(fields, OrderedDict):
    field_items = sorted(field_items)

  # Use a consistent file name that is unlikely to conflict with any imported
  # proto files.
  fields_hash = hashlib.sha1()
  for f_name, f_type in field_items:
    fields_hash.update(f_name.encode('utf-8'))
    fields_hash.update(str(f_type).encode('utf-8'))
  proto_file_name = fields_hash.hexdigest() + '.proto'

  # If the proto is anonymous, use the same hash to name it.
  if full_name is None:
    full_name = ('net.proto2.python.public.proto_builder.AnonymousProto_' +
                 fields_hash.hexdigest())
    try:
      proto_cls = _GetMessageFromFactory(pool_instance, full_name)
      return proto_cls
    except KeyError:
      # The factory's DescriptorPool doesn't know about this class yet.
      pass

  # This is the first time we see this proto: add a new descriptor to the pool.
  pool_instance.Add(
      _MakeFileDescriptorProto(proto_file_name, full_name, field_items))
  return _GetMessageFromFactory(pool_instance, full_name)
|
||||
|
||||
def _MakeFileDescriptorProto(proto_file_name, full_name, field_items):
|
||||
"""Populate FileDescriptorProto for MessageFactory's DescriptorPool."""
|
||||
package, name = full_name.rsplit('.', 1)
|
||||
file_proto = descriptor_pb2.FileDescriptorProto()
|
||||
file_proto.name = os.path.join(package.replace('.', '/'), proto_file_name)
|
||||
file_proto.package = package
|
||||
desc_proto = file_proto.message_type.add()
|
||||
desc_proto.name = name
|
||||
for f_number, (f_name, f_type) in enumerate(field_items, 1):
|
||||
field_proto = desc_proto.field.add()
|
||||
field_proto.name = f_name
|
||||
# # If the number falls in the reserved range, reassign it to the correct
|
||||
# # number after the range.
|
||||
if f_number >= descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER:
|
||||
f_number += (
|
||||
descriptor.FieldDescriptor.LAST_RESERVED_FIELD_NUMBER -
|
||||
descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER + 1)
|
||||
field_proto.number = f_number
|
||||
field_proto.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL
|
||||
field_proto.type = f_type
|
||||
return file_proto
|
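The runtime class construction above can be exercised end to end. A hedged sketch, assuming the `protobuf` package is installed; `example.RuntimeMsg` and the field names are illustrative only:

```python
# Build a message class at runtime from a {field_name: field_type} mapping,
# using the public MakeSimpleProtoClass wrapper around the helpers above.
from collections import OrderedDict

from google.protobuf import descriptor_pb2
from google.protobuf import proto_builder

# OrderedDict preserves field order; a plain dict would be sorted by name,
# exactly as the comment in MakeSimpleProtoClass describes.
fields = OrderedDict([
    ('name', descriptor_pb2.FieldDescriptorProto.TYPE_STRING),
    ('id', descriptor_pb2.FieldDescriptorProto.TYPE_INT64),
])
RuntimeMsg = proto_builder.MakeSimpleProtoClass(
    fields, full_name='example.RuntimeMsg')

msg = RuntimeMsg(name='widget', id=7)
```

Calling MakeSimpleProtoClass twice with the same fields returns the cached class, since the file name is derived from the SHA-1 of the field list.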
||||
@@ -0,0 +1,83 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Nextgen Pythonic Protobuf JSON APIs."""

from typing import Optional, Type

from google.protobuf.message import Message
from google.protobuf.descriptor_pool import DescriptorPool
from google.protobuf import json_format


def serialize(
    message: Message,
    always_print_fields_with_no_presence: bool = False,
    preserving_proto_field_name: bool = False,
    use_integers_for_enums: bool = False,
    descriptor_pool: Optional[DescriptorPool] = None,
    float_precision: Optional[int] = None,
) -> dict:
  """Converts a protobuf message to a dictionary.

  When the dictionary is encoded to JSON, it conforms to the proto3 JSON spec.

  Args:
    message: The protocol buffers message instance to serialize.
    always_print_fields_with_no_presence: If True, fields without presence
      (implicit-presence scalars, repeated fields, and map fields) will always
      be serialized. Any field that supports presence is not affected by this
      option (including singular message fields and oneof fields).
    preserving_proto_field_name: If True, use the original proto field names as
      defined in the .proto file. If False, convert the field names to
      lowerCamelCase.
    use_integers_for_enums: If True, print integers instead of enum names.
    descriptor_pool: A DescriptorPool for resolving types. If None, use the
      default.
    float_precision: If set, use this to specify the number of valid digits for
      float fields.

  Returns:
    A dict representation of the protocol buffer message.
  """
  return json_format.MessageToDict(
      message,
      always_print_fields_with_no_presence=always_print_fields_with_no_presence,
      preserving_proto_field_name=preserving_proto_field_name,
      use_integers_for_enums=use_integers_for_enums,
      descriptor_pool=descriptor_pool,
      float_precision=float_precision,
  )


def parse(
    message_class: Type[Message],
    js_dict: dict,
    ignore_unknown_fields: bool = False,
    descriptor_pool: Optional[DescriptorPool] = None,
    max_recursion_depth: int = 100,
) -> Message:
  """Parses a JSON dictionary representation into a message.

  Args:
    message_class: The message class.
    js_dict: Dict representation of a JSON message.
    ignore_unknown_fields: If True, do not raise errors for unknown fields.
    descriptor_pool: A DescriptorPool for resolving types. If None, use the
      default.
    max_recursion_depth: Maximum recursion depth of the JSON message to be
      deserialized. JSON messages over this depth will fail to be deserialized.
      The default is 100.

  Returns:
    A new message parsed from js_dict.
  """
  new_message = message_class()
  json_format.ParseDict(
      js_dict=js_dict,
      message=new_message,
      ignore_unknown_fields=ignore_unknown_fields,
      descriptor_pool=descriptor_pool,
      max_recursion_depth=max_recursion_depth,
  )
  return new_message
@@ -0,0 +1,129 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2025 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Nextgen Pythonic Protobuf Text Format APIs."""
from typing import AnyStr, Callable, Optional, Text, Type, Union

from google.protobuf import text_format
from google.protobuf.descriptor_pool import DescriptorPool
from google.protobuf.message import Message

_MsgFormatter = Callable[[Message, Union[int, bool], bool], Optional[Text]]


def serialize(
    message: Message,
    as_utf8: bool = True,
    as_one_line: bool = False,
    use_short_repeated_primitives: bool = False,
    pointy_brackets: bool = False,
    use_index_order: bool = False,
    float_format: Optional[str] = None,
    double_format: Optional[str] = None,
    use_field_number: bool = False,
    descriptor_pool: Optional[DescriptorPool] = None,
    indent: int = 0,
    message_formatter: Optional[_MsgFormatter] = None,
    print_unknown_fields: bool = False,
    force_colon: bool = False,
) -> str:
  """Converts a protobuf message to text format.

  Double values can be formatted compactly with 15 digits of precision (which
  is the most that IEEE 754 "double" can guarantee) using double_format='.15g'.
  To ensure that converting to text and back to a proto will result in an
  identical value, double_format='.17g' should be used.

  Args:
    message: The protocol buffers message.
    as_utf8: Return unescaped Unicode for non-ASCII characters.
    as_one_line: Don't introduce newlines between fields.
    use_short_repeated_primitives: Use short repeated format for primitives.
    pointy_brackets: If True, use angle brackets instead of curly braces for
      nesting.
    use_index_order: If True, fields of a proto message will be printed using
      the order defined in the source code instead of by field number;
      extensions will be printed at the end of the message, with their relative
      order determined by the extension number. By default, field number order
      is used.
    float_format (str): If set, use this to specify float field formatting (per
      the "Format Specification Mini-Language"); otherwise, the shortest float
      that has the same value on the wire will be printed. Also affects double
      fields if double_format is not set but float_format is set.
    double_format (str): If set, use this to specify double field formatting
      (per the "Format Specification Mini-Language"); if it is not set but
      float_format is set, use float_format. Otherwise, use ``str()``.
    use_field_number: If True, print field numbers instead of names.
    descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types.
    indent (int): The initial indent level, in terms of spaces, for pretty
      printing.
    message_formatter (function(message, indent, as_one_line) -> unicode|None):
      Custom formatter for selected sub-messages (usually based on message
      type). Use to pretty-print parts of the protobuf for easier diffing.
    print_unknown_fields: If True, unknown fields will be printed.
    force_colon: If set, a colon will be added after the field name even if the
      field is a proto message.

  Returns:
    str: A string of the text-formatted protocol buffer message.
  """
  return text_format.MessageToString(
      message=message,
      as_utf8=as_utf8,
      as_one_line=as_one_line,
      use_short_repeated_primitives=use_short_repeated_primitives,
      pointy_brackets=pointy_brackets,
      use_index_order=use_index_order,
      float_format=float_format,
      double_format=double_format,
      use_field_number=use_field_number,
      descriptor_pool=descriptor_pool,
      indent=indent,
      message_formatter=message_formatter,
      print_unknown_fields=print_unknown_fields,
      force_colon=force_colon,
  )


def parse(
    message_class: Type[Message],
    text: AnyStr,
    allow_unknown_extension: bool = False,
    allow_field_number: bool = False,
    descriptor_pool: Optional[DescriptorPool] = None,
    allow_unknown_field: bool = False,
) -> Message:
  """Parses a text representation of a protocol message into a message.

  Args:
    message_class: The message class.
    text (str): Message text representation.
    allow_unknown_extension: If True, skip over missing extensions and keep
      parsing.
    allow_field_number: If True, both field numbers and field names are
      allowed.
    descriptor_pool (DescriptorPool): Descriptor pool used to resolve Any types.
    allow_unknown_field: If True, skip over unknown fields and keep parsing.
      Avoid using this option if possible; it may hide some errors (e.g. a
      spelling error in a field name).

  Returns:
    Message: A new message parsed from text.

  Raises:
    ParseError: On text parsing problems.
  """
  new_message = message_class()
  text_format.Parse(
      text=text,
      message=new_message,
      allow_unknown_extension=allow_unknown_extension,
      allow_field_number=allow_field_number,
      descriptor_pool=descriptor_pool,
      allow_unknown_field=allow_unknown_field,
  )
  return new_message
@@ -0,0 +1,49 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Protocol message implementation hooks for C++ implementation.

Contains helper functions used to create protocol message classes from
Descriptor objects at runtime backed by the protocol buffer C++ API.
"""

__author__ = 'tibell@google.com (Johan Tibell)'

from google.protobuf.internal import api_implementation


# pylint: disable=protected-access
_message = api_implementation._c_module
# TODO: Remove this import after fixing api_implementation.
if _message is None:
  from google.protobuf.pyext import _message


class GeneratedProtocolMessageType(_message.MessageMeta):

  """Metaclass for protocol message classes created at runtime from Descriptors.

  The protocol compiler currently uses this metaclass to create protocol
  message classes at runtime. Clients can also manually create their own
  classes at runtime, as in this example::

    mydescriptor = Descriptor(.....)
    factory = symbol_database.Default()
    factory.pool.AddDescriptor(mydescriptor)
    MyProtoClass = message_factory.GetMessageClass(mydescriptor)
    myproto_instance = MyProtoClass()
    myproto_instance.foo_field = 23
    ...

  The above example will not work for nested types. If you wish to include
  them, use reflection.MakeClass() instead of manually instantiating the class
  in order to create the appropriate class structure.
  """

  # Must be consistent with the protocol-compiler code in
  # proto2/compiler/internal/generator.*.
  _DESCRIPTOR_KEY = 'DESCRIPTOR'
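The docstring's example builds a descriptor by hand, but `message_factory.GetMessageClass` also resolves descriptors of already-generated messages. A hedged sketch of that simpler path:

```python
# Look up the concrete message class behind an existing descriptor.
from google.protobuf import message_factory
from google.protobuf import struct_pb2

# For a compile-time-generated message, GetMessageClass returns the
# generated class itself rather than building a new one.
cls = struct_pb2.Value.DESCRIPTOR
ValueClass = message_factory.GetMessageClass(cls)

v = ValueClass(number_value=1.5)
```

This is handy when code only has a `Descriptor` in hand (e.g. from a pool lookup) and needs an instantiable class.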
@@ -0,0 +1,36 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# This code is meant to work on Python 2.4 and above only.

"""Contains a metaclass and helper functions used to create
protocol message classes from Descriptor objects at runtime.

Recall that a metaclass is the "type" of a class.
(A class is to a metaclass what an instance is to a class.)

In this case, we use the GeneratedProtocolMessageType metaclass
to inject all the useful functionality into the classes
output by the protocol compiler at compile-time.

The upshot of all this is that the real implementation
details for ALL pure-Python protocol buffers are *here in
this file*.
"""

__author__ = 'robinson@google.com (Will Robinson)'

import warnings

from google.protobuf import message_factory
from google.protobuf import symbol_database

# The type of all Message classes.
# Part of the public interface, but normally only used by message factories.
GeneratedProtocolMessageType = message_factory._GENERATED_PROTOCOL_MESSAGE_TYPE

MESSAGE_CLASS_CACHE = {}
@@ -0,0 +1,104 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Protobuf runtime versions and validators.

It should only be accessed by Protobuf gencode and tests. DO NOT USE it
elsewhere.
"""

__author__ = 'shaod@google.com (Dennis Shao)'

from enum import Enum
import os
import warnings


class Domain(Enum):
  GOOGLE_INTERNAL = 1
  PUBLIC = 2


# The versions of this Python Protobuf runtime, changed automatically by the
# Protobuf release process. Do not edit them manually.
# These OSS versions are not stripped to avoid merging conflicts.
OSS_DOMAIN = Domain.PUBLIC
OSS_MAJOR = 6
OSS_MINOR = 33
OSS_PATCH = 5
OSS_SUFFIX = ''

DOMAIN = OSS_DOMAIN
MAJOR = OSS_MAJOR
MINOR = OSS_MINOR
PATCH = OSS_PATCH
SUFFIX = OSS_SUFFIX

# Avoid flooding of warnings.
_MAX_WARNING_COUNT = 20
_warning_count = 0


class VersionError(Exception):
  """Exception class for version violation."""


def _ReportVersionError(msg):
  raise VersionError(msg)


def ValidateProtobufRuntimeVersion(
    gen_domain, gen_major, gen_minor, gen_patch, gen_suffix, location
):
  """Validates the gencode version against the runtime version.

  Args:
    gen_domain: The domain where the code was generated.
    gen_major: The major version number of the gencode.
    gen_minor: The minor version number of the gencode.
    gen_patch: The patch version number of the gencode.
    gen_suffix: The version suffix, e.g. '-dev' or '-rc1', of the gencode.
    location: The proto location that causes the version violation.

  Raises:
    VersionError: If the gencode version is invalid or incompatible with the
      runtime.
  """

  disable_flag = os.getenv('TEMPORARILY_DISABLE_PROTOBUF_VERSION_CHECK')
  if disable_flag is not None and disable_flag.lower() == 'true':
    return

  global _warning_count

  version = f'{MAJOR}.{MINOR}.{PATCH}{SUFFIX}'
  gen_version = f'{gen_major}.{gen_minor}.{gen_patch}{gen_suffix}'

  if gen_major < 0 or gen_minor < 0 or gen_patch < 0:
    raise VersionError(f'Invalid gencode version: {gen_version}')

  error_prompt = (
      'See Protobuf version guarantees at'
      ' https://protobuf.dev/support/cross-version-runtime-guarantee.'
  )

  if gen_domain != DOMAIN:
    _ReportVersionError(
        'Detected mismatched Protobuf Gencode/Runtime domains when loading'
        f' {location}: gencode {gen_domain.name} runtime {DOMAIN.name}.'
        ' Cross-domain usage of Protobuf is not supported.'
    )

  if (
      MAJOR < gen_major
      or (MAJOR == gen_major and MINOR < gen_minor)
      or (MAJOR == gen_major and MINOR == gen_minor and PATCH < gen_patch)
  ):
    _ReportVersionError(
        'Detected incompatible Protobuf Gencode/Runtime versions when loading'
        f' {location}: gencode {gen_version} runtime {version}. Runtime version'
        f' cannot be older than the linked gencode version. {error_prompt}'
    )
@@ -0,0 +1,272 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains metaclasses used to create protocol service and service stub
classes from ServiceDescriptor objects at runtime.

The GeneratedServiceType and GeneratedServiceStubType metaclasses are used to
inject all useful functionality into the classes output by the protocol
compiler at compile-time.
"""

__author__ = 'petar@google.com (Petar Petrov)'


class GeneratedServiceType(type):

  """Metaclass for service classes created at runtime from ServiceDescriptors.

  Implementations for all methods described in the Service class are added here
  by this class. We also create properties to allow getting/setting all fields
  in the protocol message.

  The protocol compiler currently uses this metaclass to create protocol
  service classes at runtime. Clients can also manually create their own
  classes at runtime, as in this example::

    mydescriptor = ServiceDescriptor(.....)
    class MyProtoService(service.Service):
      __metaclass__ = GeneratedServiceType
      DESCRIPTOR = mydescriptor
    myservice_instance = MyProtoService()
    # ...
  """

  _DESCRIPTOR_KEY = 'DESCRIPTOR'

  def __init__(cls, name, bases, dictionary):
    """Creates a message service class.

    Args:
      name: Name of the class (ignored, but required by the metaclass
        protocol).
      bases: Base classes of the class being constructed.
      dictionary: The class dictionary of the class being constructed.
        dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
        describing this protocol service type.
    """
    # Don't do anything if this class doesn't have a descriptor. This happens
    # when a service class is subclassed.
    if GeneratedServiceType._DESCRIPTOR_KEY not in dictionary:
      return

    descriptor = dictionary[GeneratedServiceType._DESCRIPTOR_KEY]
    service_builder = _ServiceBuilder(descriptor)
    service_builder.BuildService(cls)
    cls.DESCRIPTOR = descriptor


class GeneratedServiceStubType(GeneratedServiceType):

  """Metaclass for service stubs created at runtime from ServiceDescriptors.

  This class has similar responsibilities as GeneratedServiceType, except that
  it creates the service stub classes.
  """

  _DESCRIPTOR_KEY = 'DESCRIPTOR'

  def __init__(cls, name, bases, dictionary):
    """Creates a message service stub class.

    Args:
      name: Name of the class (ignored here).
      bases: Base classes of the class being constructed.
      dictionary: The class dictionary of the class being constructed.
        dictionary[_DESCRIPTOR_KEY] must contain a ServiceDescriptor object
        describing this protocol service type.
    """
    super(GeneratedServiceStubType, cls).__init__(name, bases, dictionary)
    # Don't do anything if this class doesn't have a descriptor. This happens
    # when a service stub is subclassed.
    if GeneratedServiceStubType._DESCRIPTOR_KEY not in dictionary:
      return

    descriptor = dictionary[GeneratedServiceStubType._DESCRIPTOR_KEY]
    service_stub_builder = _ServiceStubBuilder(descriptor)
    service_stub_builder.BuildServiceStub(cls)


class _ServiceBuilder(object):

  """This class constructs a protocol service class using a service descriptor.

  Given a service descriptor, this class constructs a class that represents
  the specified service descriptor. One service builder instance constructs
  exactly one service class. That means all instances of that class share the
  same builder.
  """

  def __init__(self, service_descriptor):
    """Initializes an instance of the service class builder.

    Args:
      service_descriptor: ServiceDescriptor to use when constructing the
        service class.
    """
    self.descriptor = service_descriptor

  def BuildService(builder, cls):
    """Constructs the service class.

    Args:
      cls: The class that will be constructed.
    """

    # CallMethod needs to operate with an instance of the Service class. This
    # internal wrapper function exists only to be able to pass the service
    # instance to the method that does the real CallMethod work.
    # Make sure to use the exact argument names from the abstract interface in
    # service.py to match the type signature.
    def _WrapCallMethod(self, method_descriptor, rpc_controller, request, done):
      return builder._CallMethod(self, method_descriptor, rpc_controller,
                                 request, done)

    def _WrapGetRequestClass(self, method_descriptor):
      return builder._GetRequestClass(method_descriptor)

    def _WrapGetResponseClass(self, method_descriptor):
      return builder._GetResponseClass(method_descriptor)

    builder.cls = cls
    cls.CallMethod = _WrapCallMethod
    cls.GetDescriptor = staticmethod(lambda: builder.descriptor)
    cls.GetDescriptor.__doc__ = 'Returns the service descriptor.'
    cls.GetRequestClass = _WrapGetRequestClass
    cls.GetResponseClass = _WrapGetResponseClass
    for method in builder.descriptor.methods:
      setattr(cls, method.name, builder._GenerateNonImplementedMethod(method))

  def _CallMethod(self, srvc, method_descriptor,
                  rpc_controller, request, callback):
    """Calls the method described by a given method descriptor.

    Args:
      srvc: Instance of the service for which this method is called.
      method_descriptor: Descriptor that represents the method to call.
      rpc_controller: RPC controller to use for this method's execution.
      request: Request protocol message.
      callback: A callback to invoke after the method has completed.
    """
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'CallMethod() given method descriptor for wrong service type.')
    method = getattr(srvc, method_descriptor.name)
    return method(rpc_controller, request, callback)

  def _GetRequestClass(self, method_descriptor):
    """Returns the class of the request protocol message.

    Args:
      method_descriptor: Descriptor of the method for which to return the
        request protocol message class.

    Returns:
      A class that represents the input protocol message of the specified
      method.
    """
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'GetRequestClass() given method descriptor for wrong service type.')
    return method_descriptor.input_type._concrete_class

  def _GetResponseClass(self, method_descriptor):
    """Returns the class of the response protocol message.

    Args:
      method_descriptor: Descriptor of the method for which to return the
        response protocol message class.

    Returns:
      A class that represents the output protocol message of the specified
      method.
    """
    if method_descriptor.containing_service != self.descriptor:
      raise RuntimeError(
          'GetResponseClass() given method descriptor for wrong service type.')
    return method_descriptor.output_type._concrete_class

  def _GenerateNonImplementedMethod(self, method):
    """Generates and returns a method that can be set for a service method.

    Args:
      method: Descriptor of the service method for which a method is to be
        generated.

    Returns:
      A method that can be added to the service class.
    """
    return lambda inst, rpc_controller, request, callback: (
        self._NonImplementedMethod(method.name, rpc_controller, callback))

  def _NonImplementedMethod(self, method_name, rpc_controller, callback):
    """The body of all methods in the generated service class.

    Args:
      method_name: Name of the method being executed.
      rpc_controller: RPC controller used to execute this method.
      callback: A callback which will be invoked when the method finishes.
    """
    rpc_controller.SetFailed('Method %s not implemented.' % method_name)
    callback(None)


class _ServiceStubBuilder(object):

  """Constructs a protocol service stub class using a service descriptor.

  Given a service descriptor, this class constructs a suitable stub class.
  A stub is just a type-safe wrapper around an RpcChannel which emulates a
  local implementation of the service.

  One service stub builder instance constructs exactly one class. That means
  all instances of that class share the same service stub builder.
  """

  def __init__(self, service_descriptor):
    """Initializes an instance of the service stub class builder.

    Args:
      service_descriptor: ServiceDescriptor to use when constructing the
        stub class.
    """
    self.descriptor = service_descriptor

  def BuildServiceStub(self, cls):
    """Constructs the stub class.

    Args:
      cls: The class that will be constructed.
    """

    def _ServiceStubInit(stub, rpc_channel):
      stub.rpc_channel = rpc_channel
    self.cls = cls
    cls.__init__ = _ServiceStubInit
    for method in self.descriptor.methods:
      setattr(cls, method.name, self._GenerateStubMethod(method))

  def _GenerateStubMethod(self, method):
    return (lambda inst, rpc_controller, request, callback=None:
            self._StubMethod(inst, method, rpc_controller, request, callback))

  def _StubMethod(self, stub, method_descriptor,
                  rpc_controller, request, callback):
    """The body of all service methods in the generated stub class.

    Args:
      stub: Stub instance.
      method_descriptor: Descriptor of the invoked method.
      rpc_controller: RPC controller to execute the method.
      request: Request protocol message.
      callback: A callback to execute when the method finishes.

    Returns:
      Response message (in the case of a blocking call).
    """
    return stub.rpc_channel.CallMethod(
        method_descriptor, rpc_controller, request,
        method_descriptor.output_type._concrete_class, callback)
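The stub builder's core move is to close over each method descriptor and inject one delegating method per RPC into the stub class. A library-free sketch of that pattern, with all names illustrative rather than part of the protobuf API:

```python
# Minimal stand-ins for the descriptor and channel the real builder receives.
class FakeMethodDescriptor:
    def __init__(self, name):
        self.name = name


class RecordingChannel:
    """Plays the role of RpcChannel: records what CallMethod receives."""

    def CallMethod(self, method, controller, request, response_class, done):
        return (method.name, request)


class StubBuilder:
    def __init__(self, methods):
        self.methods = methods

    def BuildServiceStub(self, cls):
        # Inject __init__ and one method per descriptor, as the real
        # _ServiceStubBuilder.BuildServiceStub does.
        def _init(stub, rpc_channel):
            stub.rpc_channel = rpc_channel
        cls.__init__ = _init
        for method in self.methods:
            setattr(cls, method.name, self._GenerateStubMethod(method))

    def _GenerateStubMethod(self, method):
        # The lambda captures `method` so each injected method forwards the
        # right descriptor to the channel.
        return (lambda inst, controller, request, callback=None:
                inst.rpc_channel.CallMethod(method, controller, request,
                                            None, callback))


class MyStub:
    pass


StubBuilder([FakeMethodDescriptor('Ping')]).BuildServiceStub(MyStub)
stub = MyStub(RecordingChannel())
result = stub.Ping(None, 'req')
```

Because `setattr` places a plain function on the class, Python's descriptor protocol binds `inst` automatically at call time, exactly as with the generated stubs.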
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/source_context.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/source_context.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n$google/protobuf/source_context.proto\x12\x0fgoogle.protobuf\",\n\rSourceContext\x12\x1b\n\tfile_name\x18\x01 \x01(\tR\x08\x66ileNameB\x8a\x01\n\x13\x63om.google.protobufB\x12SourceContextProtoP\x01Z6google.golang.org/protobuf/types/known/sourcecontextpb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.source_context_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\022SourceContextProtoP\001Z6google.golang.org/protobuf/types/known/sourcecontextpb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_SOURCECONTEXT']._serialized_start=57
  _globals['_SOURCECONTEXT']._serialized_end=101
# @@protoc_insertion_point(module_scope)
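The generated module above defines a single `SourceContext` message with one `file_name` field. A quick round trip through its wire format:

```python
# Serialize and re-parse the SourceContext well-known type.
from google.protobuf import source_context_pb2

ctx = source_context_pb2.SourceContext(file_name='google/protobuf/any.proto')
data = ctx.SerializeToString()
restored = source_context_pb2.SourceContext.FromString(data)
```
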
@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/struct.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/struct.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1cgoogle/protobuf/struct.proto\x12\x0fgoogle.protobuf\"\x98\x01\n\x06Struct\x12;\n\x06\x66ields\x18\x01 \x03(\x0b\x32#.google.protobuf.Struct.FieldsEntryR\x06\x66ields\x1aQ\n\x0b\x46ieldsEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x12,\n\x05value\x18\x02 \x01(\x0b\x32\x16.google.protobuf.ValueR\x05value:\x02\x38\x01\"\xb2\x02\n\x05Value\x12;\n\nnull_value\x18\x01 \x01(\x0e\x32\x1a.google.protobuf.NullValueH\x00R\tnullValue\x12#\n\x0cnumber_value\x18\x02 \x01(\x01H\x00R\x0bnumberValue\x12#\n\x0cstring_value\x18\x03 \x01(\tH\x00R\x0bstringValue\x12\x1f\n\nbool_value\x18\x04 \x01(\x08H\x00R\tboolValue\x12<\n\x0cstruct_value\x18\x05 \x01(\x0b\x32\x17.google.protobuf.StructH\x00R\x0bstructValue\x12;\n\nlist_value\x18\x06 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x00R\tlistValueB\x06\n\x04kind\";\n\tListValue\x12.\n\x06values\x18\x01 \x03(\x0b\x32\x16.google.protobuf.ValueR\x06values*\x1b\n\tNullValue\x12\x0e\n\nNULL_VALUE\x10\x00\x42\x7f\n\x13\x63om.google.protobufB\x0bStructProtoP\x01Z/google.golang.org/protobuf/types/known/structpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.struct_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\013StructProtoP\001Z/google.golang.org/protobuf/types/known/structpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_STRUCT_FIELDSENTRY']._loaded_options = None
  _globals['_STRUCT_FIELDSENTRY']._serialized_options = b'8\001'
  _globals['_NULLVALUE']._serialized_start=574
  _globals['_NULLVALUE']._serialized_end=601
  _globals['_STRUCT']._serialized_start=50
  _globals['_STRUCT']._serialized_end=202
  _globals['_STRUCT_FIELDSENTRY']._serialized_start=121
  _globals['_STRUCT_FIELDSENTRY']._serialized_end=202
  _globals['_VALUE']._serialized_start=205
  _globals['_VALUE']._serialized_end=511
  _globals['_LISTVALUE']._serialized_start=513
  _globals['_LISTVALUE']._serialized_end=572
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,179 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""A database of Python protocol buffer generated symbols.

SymbolDatabase is the MessageFactory for messages generated at compile time,
and makes it easy to create new instances of a registered type, given only the
type's protocol buffer symbol name.

Example usage::

  db = symbol_database.SymbolDatabase()

  # Register symbols of interest, from one or multiple files.
  db.RegisterFileDescriptor(my_proto_pb2.DESCRIPTOR)
  db.RegisterMessage(my_proto_pb2.MyMessage)
  db.RegisterEnumDescriptor(my_proto_pb2.MyEnum.DESCRIPTOR)

  # The database can be used as a MessageFactory, to generate types based on
  # their name:
  types = db.GetMessages(['my_proto.proto'])
  my_message_instance = types['MyMessage']()

  # The database's underlying descriptor pool can be queried, so it's not
  # necessary to know a type's filename to be able to generate it:
  filename = db.pool.FindFileContainingSymbol('MyMessage')
  my_message_instance = db.GetMessages([filename])['MyMessage']()

  # This functionality is also provided directly via a convenience method:
  my_message_instance = db.GetSymbol('MyMessage')()
"""

import warnings

from google.protobuf.internal import api_implementation
from google.protobuf import descriptor_pool
from google.protobuf import message_factory


class SymbolDatabase:
  """A database of Python generated symbols."""

  # Local cache of registered classes.
  _classes = {}

  def __init__(self, pool=None):
    """Initializes a new SymbolDatabase."""
    self.pool = pool or descriptor_pool.DescriptorPool()

  def RegisterMessage(self, message):
    """Registers the given message type in the local database.

    Calls to GetSymbol() and GetMessages() will return messages registered here.

    Args:
      message: A :class:`google.protobuf.message.Message` subclass (or
        instance); its descriptor will be registered.

    Returns:
      The provided message.
    """

    desc = message.DESCRIPTOR
    self._classes[desc] = message
    self.RegisterMessageDescriptor(desc)
    return message

  def RegisterMessageDescriptor(self, message_descriptor):
    """Registers the given message descriptor in the local database.

    Args:
      message_descriptor (Descriptor): the message descriptor to add.
    """
    if api_implementation.Type() == 'python':
      # pylint: disable=protected-access
      self.pool._AddDescriptor(message_descriptor)

  def RegisterEnumDescriptor(self, enum_descriptor):
    """Registers the given enum descriptor in the local database.

    Args:
      enum_descriptor (EnumDescriptor): The enum descriptor to register.

    Returns:
      EnumDescriptor: The provided descriptor.
    """
    if api_implementation.Type() == 'python':
      # pylint: disable=protected-access
      self.pool._AddEnumDescriptor(enum_descriptor)
    return enum_descriptor

  def RegisterServiceDescriptor(self, service_descriptor):
    """Registers the given service descriptor in the local database.

    Args:
      service_descriptor (ServiceDescriptor): the service descriptor to
        register.
    """
    if api_implementation.Type() == 'python':
      # pylint: disable=protected-access
      self.pool._AddServiceDescriptor(service_descriptor)

  def RegisterFileDescriptor(self, file_descriptor):
    """Registers the given file descriptor in the local database.

    Args:
      file_descriptor (FileDescriptor): The file descriptor to register.
    """
    if api_implementation.Type() == 'python':
      # pylint: disable=protected-access
      self.pool._InternalAddFileDescriptor(file_descriptor)

  def GetSymbol(self, symbol):
    """Tries to find a symbol in the local database.

    Currently, this method only returns message.Message instances; however, it
    may be extended in the future to support other symbol types.

    Args:
      symbol (str): a protocol buffer symbol.

    Returns:
      A Python class corresponding to the symbol.

    Raises:
      KeyError: if the symbol could not be found.
    """

    return self._classes[self.pool.FindMessageTypeByName(symbol)]

  def GetMessages(self, files):
    # TODO: Fix the differences with MessageFactory.
    """Gets all registered messages from the specified files.

    Only messages already created and registered will be returned; this is the
    case for imported _pb2 modules. Unlike MessageFactory, this version also
    returns already defined nested messages, but does not register any message
    extensions.

    Args:
      files (list[str]): The file names to extract messages from.

    Returns:
      A dictionary mapping proto names to the message classes.

    Raises:
      KeyError: if a file could not be found.
    """

    def _GetAllMessages(desc):
      """Walks a message Descriptor and recursively yields it and all nested message descriptors."""
      yield desc
      for msg_desc in desc.nested_types:
        for nested_desc in _GetAllMessages(msg_desc):
          yield nested_desc

    result = {}
    for file_name in files:
      file_desc = self.pool.FindFileByName(file_name)
      for msg_desc in file_desc.message_types_by_name.values():
        for desc in _GetAllMessages(msg_desc):
          try:
            result[desc.full_name] = self._classes[desc]
          except KeyError:
            # This descriptor has no registered class, skip it.
            pass
    return result


_DEFAULT = SymbolDatabase(pool=descriptor_pool.Default())


def Default():
  """Returns the default SymbolDatabase."""
  return _DEFAULT
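The GetMessages helper above walks each file's top-level messages and their nested types. A minimal, self-contained sketch of that nested-descriptor walk, using a hypothetical `Desc` stand-in for a real protobuf Descriptor (only the `nested_types` and `full_name` attributes are modeled; none of this is part of the file above):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Desc:
    """Stand-in for a protobuf message Descriptor (illustrative only)."""
    full_name: str
    nested_types: List["Desc"] = field(default_factory=list)

def get_all_messages(desc):
    """Yield a descriptor and, recursively, all of its nested descriptors."""
    yield desc
    for nested in desc.nested_types:
        yield from get_all_messages(nested)

# A message with one nested message and a map entry two levels deep.
outer = Desc("Outer", [
    Desc("Outer.Inner"),
    Desc("Outer.Map", [Desc("Outer.Map.Entry")]),
])
names = [d.full_name for d in get_all_messages(outer)]
# names == ['Outer', 'Outer.Inner', 'Outer.Map', 'Outer.Map.Entry']
```

GetMessages applies exactly this pre-order walk, then keeps only descriptors that already have a registered class.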
@@ -0,0 +1,106 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Encoding related utilities."""
import re


def _AsciiIsPrint(i):
  return i >= 32 and i < 127


def _MakeStrEscapes():
  ret = {}
  for i in range(0, 128):
    if not _AsciiIsPrint(i):
      ret[i] = r'\%03o' % i
  ret[ord('\t')] = r'\t'  # optional escape
  ret[ord('\n')] = r'\n'  # optional escape
  ret[ord('\r')] = r'\r'  # optional escape
  ret[ord('"')] = r'\"'  # necessary escape
  ret[ord('\'')] = r"\'"  # optional escape
  ret[ord('\\')] = r'\\'  # necessary escape
  return ret


# Maps int -> char, performing string escapes.
_str_escapes = _MakeStrEscapes()

# Maps int -> char, performing byte escaping and string escapes.
_byte_escapes = {i: chr(i) for i in range(0, 256)}
_byte_escapes.update(_str_escapes)
_byte_escapes.update({i: r'\%03o' % i for i in range(128, 256)})


def _DecodeUtf8EscapeErrors(text_bytes):
  ret = ''
  while text_bytes:
    try:
      ret += text_bytes.decode('utf-8').translate(_str_escapes)
      text_bytes = ''
    except UnicodeDecodeError as e:
      ret += text_bytes[:e.start].decode('utf-8').translate(_str_escapes)
      ret += _byte_escapes[text_bytes[e.start]]
      text_bytes = text_bytes[e.start+1:]
  return ret


def CEscape(text, as_utf8) -> str:
  """Escapes a bytes string for use in a text protocol buffer.

  Args:
    text: A byte string to be escaped.
    as_utf8: Specifies whether the result may contain non-ASCII characters.
      In Python 3 this allows unescaped non-ASCII Unicode characters.
      In Python 2 the return value will be valid UTF-8 rather than only ASCII.

  Returns:
    Escaped string (str).
  """
  # Python's text.encode() 'string_escape' or 'unicode_escape' codecs do not
  # satisfy our needs; they encode unprintable characters using two-digit hex
  # escapes whereas our C++ unescaping function allows hex escapes to be any
  # length. So, "\0011".encode('string_escape') ends up being "\\x011", which
  # will be decoded in C++ as a single-character string with char code 0x11.
  text_is_unicode = isinstance(text, str)
  if as_utf8:
    if text_is_unicode:
      return text.translate(_str_escapes)
    else:
      return _DecodeUtf8EscapeErrors(text)
  else:
    if text_is_unicode:
      text = text.encode('utf-8')
    return ''.join([_byte_escapes[c] for c in text])


_CUNESCAPE_HEX = re.compile(r'(\\+)x([0-9a-fA-F])(?![0-9a-fA-F])')


def CUnescape(text: str) -> bytes:
  """Unescapes a text string with C-style escape sequences to UTF-8 bytes.

  Args:
    text: The data to parse in a str.

  Returns:
    A byte string.
  """

  def ReplaceHex(m):
    # Only replace the match if the number of leading backslashes is odd, i.e.
    # the slash itself is not escaped.
    if len(m.group(1)) & 1:
      return m.group(1) + 'x0' + m.group(2)
    return m.group(0)

  # This is required because the 'string_escape' encoding doesn't
  # allow single-digit hex escapes (like '\xf').
  result = _CUNESCAPE_HEX.sub(ReplaceHex, text)

  # Replaces Unicode escape sequences with their character equivalents.
  result = result.encode('raw_unicode_escape').decode('raw_unicode_escape')
  # Encode Unicode characters as UTF-8, then decode to Latin-1 escaping
  # unprintable characters.
  result = result.encode('utf-8').decode('unicode_escape')
  # Convert Latin-1 text back to a byte string (latin-1 codec also works here).
  return result.encode('latin-1')
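The CEscape/CUnescape pair above round-trips arbitrary bytes through printable text. A deliberately miniature version, handling only octal escapes plus backslash and double-quote (the names `mini_cescape`/`mini_cunescape` are illustrative, not part of the library), shows the round-trip property:

```python
def mini_cescape(data: bytes) -> str:
    """Escape bytes as printable text using three-digit octal escapes."""
    out = []
    for b in data:
        if b == 0x5C:            # backslash: necessary escape
            out.append('\\\\')
        elif b == 0x22:          # double quote: necessary escape
            out.append('\\"')
        elif 32 <= b < 127:      # printable ASCII passes through
            out.append(chr(b))
        else:                    # everything else becomes \ooo
            out.append('\\%03o' % b)
    return ''.join(out)

def mini_cunescape(text: str) -> bytes:
    """Invert mini_cescape."""
    out = bytearray()
    i = 0
    while i < len(text):
        c = text[i]
        if c == '\\':
            nxt = text[i + 1]
            if nxt in '\\"':                     # escaped backslash or quote
                out.append(ord(nxt))
                i += 2
            else:                                # three-digit octal escape
                out.append(int(text[i + 1:i + 4], 8))
                i += 4
        else:
            out.append(ord(c))
            i += 1
    return bytes(out)

raw = b'ab\x00"\\\xff'
escaped = mini_cescape(raw)
assert mini_cunescape(escaped) == raw  # lossless round trip
```

The real functions additionally handle `\t`/`\n`/`\r` shorthands, UTF-8 pass-through, and variable-length hex escapes, which is why CUnescape needs the `_CUNESCAPE_HEX` fix-up pass.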
File diff suppressed because it is too large
@@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Timestamp helper APIs."""

import datetime
from typing import Optional

from google.protobuf.timestamp_pb2 import Timestamp


def from_json_string(value: str) -> Timestamp:
  """Parses an RFC 3339 date string into a Timestamp.

  Args:
    value: A date string. Any fractional digits (or none) and any offset are
      accepted as long as they fit into nanosecond precision. Example of an
      accepted format: '1972-01-01T10:00:20.021-05:00'

  Raises:
    ValueError: On parsing problems.
  """
  timestamp = Timestamp()
  timestamp.FromJsonString(value)
  return timestamp


def from_microseconds(micros: float) -> Timestamp:
  """Converts microseconds since epoch to Timestamp."""
  timestamp = Timestamp()
  timestamp.FromMicroseconds(micros)
  return timestamp


def from_milliseconds(millis: float) -> Timestamp:
  """Converts milliseconds since epoch to Timestamp."""
  timestamp = Timestamp()
  timestamp.FromMilliseconds(millis)
  return timestamp


def from_nanoseconds(nanos: float) -> Timestamp:
  """Converts nanoseconds since epoch to Timestamp."""
  timestamp = Timestamp()
  timestamp.FromNanoseconds(nanos)
  return timestamp


def from_seconds(seconds: float) -> Timestamp:
  """Converts seconds since epoch to Timestamp."""
  timestamp = Timestamp()
  timestamp.FromSeconds(seconds)
  return timestamp


def from_current_time() -> Timestamp:
  """Converts the current UTC time to Timestamp."""
  timestamp = Timestamp()
  timestamp.FromDatetime(datetime.datetime.now(tz=datetime.timezone.utc))
  return timestamp


def to_json_string(ts: Timestamp) -> str:
  """Converts Timestamp to RFC 3339 date string format.

  Returns:
    A string converted from the timestamp. The string is always Z-normalized
    and uses 3, 6 or 9 fractional digits as required to represent the
    exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
  """
  return ts.ToJsonString()


def to_microseconds(ts: Timestamp) -> int:
  """Converts Timestamp to microseconds since epoch."""
  return ts.ToMicroseconds()


def to_milliseconds(ts: Timestamp) -> int:
  """Converts Timestamp to milliseconds since epoch."""
  return ts.ToMilliseconds()


def to_nanoseconds(ts: Timestamp) -> int:
  """Converts Timestamp to nanoseconds since epoch."""
  return ts.ToNanoseconds()


def to_seconds(ts: Timestamp) -> int:
  """Converts Timestamp to seconds since epoch."""
  return ts.ToSeconds()


def to_datetime(
    ts: Timestamp, tz: Optional[datetime.tzinfo] = None
) -> datetime.datetime:
  """Converts Timestamp to a datetime.

  Args:
    tz: A datetime.tzinfo subclass; defaults to None.

  Returns:
    If tz is None, returns a timezone-naive UTC datetime (with no timezone
    information, i.e. not aware that it's UTC).

    Otherwise, returns a timezone-aware datetime in the input timezone.
  """
  return ts.ToDatetime(tzinfo=tz)
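These helpers delegate to Timestamp methods such as FromNanoseconds, which split an epoch count into the message's (seconds, nanos) field pair. A sketch of that arithmetic, assuming the usual protobuf convention that nanos stays in [0, 999999999] even for pre-epoch times (the helper name `split_nanos` is illustrative, not the library's API):

```python
NANOS_PER_SECOND = 1_000_000_000

def split_nanos(epoch_nanos: int):
    """Split a nanosecond epoch count into (seconds, nanos).

    Python's divmod floors toward negative infinity, so nanos is always
    non-negative, matching the Timestamp field convention.
    """
    seconds, nanos = divmod(epoch_nanos, NANOS_PER_SECOND)
    return seconds, nanos

assert split_nanos(1_500_000_000) == (1, 500_000_000)
# One nanosecond before the epoch: seconds goes negative, nanos does not.
assert split_nanos(-1) == (-1, 999_999_999)
```

The coarser helpers (microseconds, milliseconds, seconds) are the same idea with smaller multipliers.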
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/timestamp.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/timestamp.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1fgoogle/protobuf/timestamp.proto\x12\x0fgoogle.protobuf\";\n\tTimestamp\x12\x18\n\x07seconds\x18\x01 \x01(\x03R\x07seconds\x12\x14\n\x05nanos\x18\x02 \x01(\x05R\x05nanosB\x85\x01\n\x13\x63om.google.protobufB\x0eTimestampProtoP\x01Z2google.golang.org/protobuf/types/known/timestamppb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.timestamp_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\016TimestampProtoP\001Z2google.golang.org/protobuf/types/known/timestamppb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_TIMESTAMP']._serialized_start=52
  _globals['_TIMESTAMP']._serialized_end=111
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/type.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/type.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2
from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1agoogle/protobuf/type.proto\x12\x0fgoogle.protobuf\x1a\x19google/protobuf/any.proto\x1a$google/protobuf/source_context.proto\"\xa7\x02\n\x04Type\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12.\n\x06\x66ields\x18\x02 \x03(\x0b\x32\x16.google.protobuf.FieldR\x06\x66ields\x12\x16\n\x06oneofs\x18\x03 \x03(\tR\x06oneofs\x12\x31\n\x07options\x18\x04 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x45\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12/\n\x06syntax\x18\x06 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x07 \x01(\tR\x07\x65\x64ition\"\xb4\x06\n\x05\x46ield\x12/\n\x04kind\x18\x01 \x01(\x0e\x32\x1b.google.protobuf.Field.KindR\x04kind\x12\x44\n\x0b\x63\x61rdinality\x18\x02 \x01(\x0e\x32\".google.protobuf.Field.CardinalityR\x0b\x63\x61rdinality\x12\x16\n\x06number\x18\x03 \x01(\x05R\x06number\x12\x12\n\x04name\x18\x04 \x01(\tR\x04name\x12\x19\n\x08type_url\x18\x06 \x01(\tR\x07typeUrl\x12\x1f\n\x0boneof_index\x18\x07 \x01(\x05R\noneofIndex\x12\x16\n\x06packed\x18\x08 \x01(\x08R\x06packed\x12\x31\n\x07options\x18\t \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x1b\n\tjson_name\x18\n \x01(\tR\x08jsonName\x12#\n\rdefault_value\x18\x0b \x01(\tR\x0c\x64\x65\x66\x61ultValue\"\xc8\x02\n\x04Kind\x12\x10\n\x0cTYPE_UNKNOWN\x10\x00\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"t\n\x0b\x43\x61rdinality\x12\x17\n\x13\x43\x41RDINALITY_UNKNOWN\x10\x00\x12\x18\n\x14\x43\x41RDINALITY_OPTIONAL\x10\x01\x12\x18\n\x14\x43\x41RDINALITY_REQUIRED\x10\x02\x12\x18\n\x14\x43\x41RDINALITY_REPEATED\x10\x03\"\x99\x02\n\x04\x45num\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x38\n\tenumvalue\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.EnumValueR\tenumvalue\x12\x31\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x45\n\x0esource_context\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12/\n\x06syntax\x18\x05 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x06 \x01(\tR\x07\x65\x64ition\"j\n\tEnumValue\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x16\n\x06number\x18\x02 \x01(\x05R\x06number\x12\x31\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\"H\n\x06Option\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12*\n\x05value\x18\x02 \x01(\x0b\x32\x14.google.protobuf.AnyR\x05value*C\n\x06Syntax\x12\x11\n\rSYNTAX_PROTO2\x10\x00\x12\x11\n\rSYNTAX_PROTO3\x10\x01\x12\x13\n\x0fSYNTAX_EDITIONS\x10\x02\x42{\n\x13\x63om.google.protobufB\tTypeProtoP\x01Z-google.golang.org/protobuf/types/known/typepb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.type_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\tTypeProtoP\001Z-google.golang.org/protobuf/types/known/typepb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_SYNTAX']._serialized_start=1699
  _globals['_SYNTAX']._serialized_end=1766
  _globals['_TYPE']._serialized_start=113
  _globals['_TYPE']._serialized_end=408
  _globals['_FIELD']._serialized_start=411
  _globals['_FIELD']._serialized_end=1231
  _globals['_FIELD_KIND']._serialized_start=785
  _globals['_FIELD_KIND']._serialized_end=1113
  _globals['_FIELD_CARDINALITY']._serialized_start=1115
  _globals['_FIELD_CARDINALITY']._serialized_end=1231
  _globals['_ENUM']._serialized_start=1234
  _globals['_ENUM']._serialized_end=1515
  _globals['_ENUMVALUE']._serialized_start=1517
  _globals['_ENUMVALUE']._serialized_end=1623
  _globals['_OPTION']._serialized_start=1625
  _globals['_OPTION']._serialized_end=1697
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,96 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains Unknown Fields APIs.

Simple usage example:
  unknown_field_set = UnknownFieldSet(message)
  for unknown_field in unknown_field_set:
    wire_type = unknown_field.wire_type
    field_number = unknown_field.field_number
    data = unknown_field.data
"""


from google.protobuf.internal import api_implementation


if api_implementation._c_module is not None:  # pylint: disable=protected-access
  UnknownFieldSet = api_implementation._c_module.UnknownFieldSet  # pylint: disable=protected-access
else:
  from google.protobuf.internal import decoder  # pylint: disable=g-import-not-at-top
  from google.protobuf.internal import wire_format  # pylint: disable=g-import-not-at-top

  class UnknownField:
    """A parsed unknown field."""

    # Disallows assignment to other attributes.
    __slots__ = ['_field_number', '_wire_type', '_data']

    def __init__(self, field_number, wire_type, data):
      self._field_number = field_number
      self._wire_type = wire_type
      self._data = data

    @property
    def field_number(self):
      return self._field_number

    @property
    def wire_type(self):
      return self._wire_type

    @property
    def data(self):
      return self._data

  class UnknownFieldSet:
    """UnknownField container."""

    # Disallows assignment to other attributes.
    __slots__ = ['_values']

    def __init__(self, msg):

      def InternalAdd(field_number, wire_type, data):
        unknown_field = UnknownField(field_number, wire_type, data)
        self._values.append(unknown_field)

      self._values = []
      msg_des = msg.DESCRIPTOR
      # pylint: disable=protected-access
      unknown_fields = msg._unknown_fields
      if (msg_des.has_options and
          msg_des.GetOptions().message_set_wire_format):
        local_decoder = decoder.UnknownMessageSetItemDecoder()
        for _, buffer in unknown_fields:
          (field_number, data) = local_decoder(memoryview(buffer))
          InternalAdd(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED, data)
      else:
        for tag_bytes, buffer in unknown_fields:
          field_number, wire_type = decoder.DecodeTag(tag_bytes)
          if field_number == 0:
            raise RuntimeError('Field number 0 is illegal.')
          (data, _) = decoder._DecodeUnknownField(
              memoryview(buffer), 0, len(buffer), field_number, wire_type
          )
          InternalAdd(field_number, wire_type, data)

    def __getitem__(self, index):
      size = len(self._values)
      if index < 0:
        index += size
      if index < 0 or index >= size:
        raise IndexError('index %d out of range' % index)

      return self._values[index]

    def __len__(self):
      return len(self._values)

    def __iter__(self):
      return iter(self._values)
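When iterating unknown fields above, decoder.DecodeTag parses a varint tag and splits it into a field number and a wire type: the low 3 bits of the tag are the wire type, the remaining bits the field number. A self-contained sketch of both steps (the helper names `decode_varint`/`split_tag` are illustrative, not the library's API):

```python
def decode_varint(buf: bytes):
    """Decode a little-endian base-128 varint; return (value, bytes_read)."""
    result = shift = 0
    for i, b in enumerate(buf):
        result |= (b & 0x7F) << shift   # low 7 bits carry payload
        if not b & 0x80:                # high bit clear: last byte
            return result, i + 1
        shift += 7
    raise ValueError('truncated varint')

def split_tag(tag: int):
    """Split a decoded tag into (field_number, wire_type)."""
    return tag >> 3, tag & 0x7

# Field 1 with wire type 2 (length-delimited) encodes as the tag byte 0x0A.
value, _ = decode_varint(b'\x0a')
assert split_tag(value) == (1, 2)
```

The `message_set_wire_format` branch bypasses this split because MessageSet items are always length-delimited.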
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/wrappers.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/wrappers.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/wrappers.proto\x12\x0fgoogle.protobuf\"#\n\x0b\x44oubleValue\x12\x14\n\x05value\x18\x01 \x01(\x01R\x05value\"\"\n\nFloatValue\x12\x14\n\x05value\x18\x01 \x01(\x02R\x05value\"\"\n\nInt64Value\x12\x14\n\x05value\x18\x01 \x01(\x03R\x05value\"#\n\x0bUInt64Value\x12\x14\n\x05value\x18\x01 \x01(\x04R\x05value\"\"\n\nInt32Value\x12\x14\n\x05value\x18\x01 \x01(\x05R\x05value\"#\n\x0bUInt32Value\x12\x14\n\x05value\x18\x01 \x01(\rR\x05value\"!\n\tBoolValue\x12\x14\n\x05value\x18\x01 \x01(\x08R\x05value\"#\n\x0bStringValue\x12\x14\n\x05value\x18\x01 \x01(\tR\x05value\"\"\n\nBytesValue\x12\x14\n\x05value\x18\x01 \x01(\x0cR\x05valueB\x83\x01\n\x13\x63om.google.protobufB\rWrappersProtoP\x01Z1google.golang.org/protobuf/types/known/wrapperspb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.wrappers_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\rWrappersProtoP\001Z1google.golang.org/protobuf/types/known/wrapperspb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_DOUBLEVALUE']._serialized_start=51
  _globals['_DOUBLEVALUE']._serialized_end=86
  _globals['_FLOATVALUE']._serialized_start=88
  _globals['_FLOATVALUE']._serialized_end=122
  _globals['_INT64VALUE']._serialized_start=124
  _globals['_INT64VALUE']._serialized_end=158
  _globals['_UINT64VALUE']._serialized_start=160
  _globals['_UINT64VALUE']._serialized_end=195
  _globals['_INT32VALUE']._serialized_start=197
  _globals['_INT32VALUE']._serialized_end=231
  _globals['_UINT32VALUE']._serialized_start=233
  _globals['_UINT32VALUE']._serialized_end=268
  _globals['_BOOLVALUE']._serialized_start=270
  _globals['_BOOLVALUE']._serialized_end=303
  _globals['_STRINGVALUE']._serialized_start=305
  _globals['_STRINGVALUE']._serialized_end=340
  _globals['_BYTESVALUE']._serialized_start=342
  _globals['_BYTESVALUE']._serialized_end=376
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1 @@
pip
@@ -0,0 +1,20 @@
Copyright (c) 2021 Peter Odding

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,216 @@
Metadata-Version: 2.1
Name: humanfriendly
Version: 10.0
Summary: Human friendly output for text interfaces using Python
Home-page: https://humanfriendly.readthedocs.io
Author: Peter Odding
Author-email: peter@peterodding.com
License: MIT
Platform: UNKNOWN
Classifier: Development Status :: 6 - Mature
Classifier: Environment :: Console
Classifier: Framework :: Sphinx :: Extension
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Communications
Classifier: Topic :: Scientific/Engineering :: Human Machine Interfaces
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: User Interfaces
Classifier: Topic :: System :: Shells
Classifier: Topic :: System :: System Shells
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Terminals
Classifier: Topic :: Text Processing :: General
Classifier: Topic :: Text Processing :: Linguistic
Classifier: Topic :: Utilities
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
Requires-Dist: monotonic ; python_version == "2.7"
Requires-Dist: pyreadline ; sys_platform == "win32" and python_version<"3.8"
Requires-Dist: pyreadline3 ; sys_platform == "win32" and python_version>="3.8"

humanfriendly: Human friendly input/output in Python
====================================================

.. image:: https://github.com/xolox/python-humanfriendly/actions/workflows/test.yml/badge.svg?branch=master
   :target: https://github.com/xolox/python-humanfriendly/actions
|
||||
|
||||
.. image:: https://codecov.io/gh/xolox/python-humanfriendly/branch/master/graph/badge.svg?token=jYaj4T74TU
|
||||
:target: https://codecov.io/gh/xolox/python-humanfriendly
|
||||
|
||||
The functions and classes in the `humanfriendly` package can be used to make
|
||||
text interfaces more user friendly. Some example features:
|
||||
|
||||
- Parsing and formatting numbers, file sizes, pathnames and timespans in
|
||||
simple, human friendly formats.
|
||||
|
||||
- Easy to use timers for long running operations, with human friendly
|
||||
formatting of the resulting timespans.
|
||||
|
||||
- Prompting the user to select a choice from a list of options by typing the
|
||||
option's number or a unique substring of the option.
|
||||
|
||||
- Terminal interaction including text styling (`ANSI escape sequences`_), user
|
||||
friendly rendering of usage messages and querying the terminal for its
|
||||
size.
|
||||
|
||||
The `humanfriendly` package is currently tested on Python 2.7, 3.5+ and PyPy
|
||||
(2.7) on Linux and macOS. While the intention is to support Windows as well,
|
||||
you may encounter some rough edges.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
Getting started
|
||||
---------------
|
||||
|
||||
It's very simple to start using the `humanfriendly` package::
|
||||
|
||||
>>> from humanfriendly import format_size, parse_size
|
||||
>>> from humanfriendly.prompts import prompt_for_input
|
||||
>>> user_input = prompt_for_input("Enter a readable file size: ")
|
||||
|
||||
Enter a readable file size: 16G
|
||||
|
||||
>>> num_bytes = parse_size(user_input)
|
||||
>>> print(num_bytes)
|
||||
16000000000
|
||||
>>> print("You entered:", format_size(num_bytes))
|
||||
You entered: 16 GB
|
||||
>>> print("You entered:", format_size(num_bytes, binary=True))
|
||||
You entered: 14.9 GiB
|
||||
|
||||
To get a demonstration of supported terminal text styles (based on
|
||||
`ANSI escape sequences`_) you can run the following command::
|
||||
|
||||
$ humanfriendly --demo
|
||||
|
||||
Command line
|
||||
------------
|
||||
|
||||
.. A DRY solution to avoid duplication of the `humanfriendly --help' text:
|
||||
..
|
||||
.. [[[cog
|
||||
.. from humanfriendly.usage import inject_usage
|
||||
.. inject_usage('humanfriendly.cli')
|
||||
.. ]]]
|
||||
|
||||
**Usage:** `humanfriendly [OPTIONS]`
|
||||
|
||||
Human friendly input/output (text formatting) on the command
|
||||
line based on the Python package with the same name.
|
||||
|
||||
**Supported options:**
|
||||
|
||||
.. csv-table::
|
||||
:header: Option, Description
|
||||
:widths: 30, 70
|
||||
|
||||
|
||||
"``-c``, ``--run-command``","Execute an external command (given as the positional arguments) and render
|
||||
a spinner and timer while the command is running. The exit status of the
|
||||
command is propagated."
|
||||
``--format-table``,"Read tabular data from standard input (each line is a row and each
|
||||
whitespace separated field is a column), format the data as a table and
|
||||
print the resulting table to standard output. See also the ``--delimiter``
|
||||
option."
|
||||
"``-d``, ``--delimiter=VALUE``","Change the delimiter used by ``--format-table`` to ``VALUE`` (a string). By default
|
||||
all whitespace is treated as a delimiter."
|
||||
"``-l``, ``--format-length=LENGTH``","Convert a length count (given as the integer or float ``LENGTH``) into a human
|
||||
readable string and print that string to standard output."
|
||||
"``-n``, ``--format-number=VALUE``","Format a number (given as the integer or floating point number ``VALUE``) with
|
||||
thousands separators and two decimal places (if needed) and print the
|
||||
formatted number to standard output."
|
||||
"``-s``, ``--format-size=BYTES``","Convert a byte count (given as the integer ``BYTES``) into a human readable
|
||||
string and print that string to standard output."
|
||||
"``-b``, ``--binary``","Change the output of ``-s``, ``--format-size`` to use binary multiples of bytes
|
||||
(base-2) instead of the default decimal multiples of bytes (base-10)."
|
||||
"``-t``, ``--format-timespan=SECONDS``","Convert a number of seconds (given as the floating point number ``SECONDS``)
|
||||
into a human readable timespan and print that string to standard output."
|
||||
``--parse-length=VALUE``,"Parse a human readable length (given as the string ``VALUE``) and print the
|
||||
number of metres to standard output."
|
||||
``--parse-size=VALUE``,"Parse a human readable data size (given as the string ``VALUE``) and print the
|
||||
number of bytes to standard output."
|
||||
``--demo``,"Demonstrate changing the style and color of the terminal font using ANSI
|
||||
escape sequences."
|
||||
"``-h``, ``--help``",Show this message and exit.
|
||||
|
||||
.. [[[end]]]
|
||||
|
||||
A note about size units
|
||||
-----------------------
|
||||
|
||||
When I originally published the `humanfriendly` package I went with binary
|
||||
multiples of bytes (powers of two). It was pointed out several times that this
|
||||
was a poor choice (see issue `#4`_ and pull requests `#8`_ and `#9`_) and thus
|
||||
the new default became decimal multiples of bytes (powers of ten):
|
||||
|
||||
+------+---------------+---------------+
|
||||
| Unit | Binary value | Decimal value |
|
||||
+------+---------------+---------------+
|
||||
| KB | 1024 | 1000 |
|
||||
+------+---------------+---------------+
|
||||
| MB | 1048576 | 1000000 |
|
||||
+------+---------------+---------------+
|
||||
| GB | 1073741824 | 1000000000 |
|
||||
+------+---------------+---------------+
|
||||
| TB | 1099511627776 | 1000000000000 |
|
||||
+------+---------------+---------------+
|
||||
| etc | | |
|
||||
+------+---------------+---------------+
|
||||
|
||||
The option to use binary multiples of bytes remains by passing the keyword
|
||||
argument `binary=True` to the `format_size()`_ and `parse_size()`_ functions.
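
The decimal-versus-binary distinction in the table above can be sketched in a few lines of plain Python (this is illustrative code, not the package's actual implementation — `sketch_format_size` is a hypothetical helper)::

    # Pick the largest unit whose divider fits, mirroring the table above.
    decimal_units = [(1000 ** p, s) for p, s in ((3, 'GB'), (2, 'MB'), (1, 'KB'))]
    binary_units = [(1024 ** p, s) for p, s in ((3, 'GiB'), (2, 'MiB'), (1, 'KiB'))]

    def sketch_format_size(num_bytes, binary=False):
        for divider, symbol in (binary_units if binary else decimal_units):
            if num_bytes >= divider:
                return '%g %s' % (num_bytes / float(divider), symbol)
        return '%i bytes' % num_bytes

    sketch_format_size(1000)               # '1 KB'
    sketch_format_size(1024, binary=True)  # '1 KiB'

The real `format_size()`_ additionally rounds to two decimal places and pluralizes correctly, but the unit selection follows the same idea.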
|
||||
|
||||
Windows support
|
||||
---------------
|
||||
|
||||
Windows 10 gained native support for ANSI escape sequences which means commands
|
||||
like ``humanfriendly --demo`` should work out of the box (if your system is
|
||||
up-to-date enough). If this doesn't work then you can install the colorama_
|
||||
package, it will be used automatically once installed.
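
The optional-dependency fallback described here is a common pattern and can be sketched as follows (``supports_ansi`` is a hypothetical helper, not part of `humanfriendly`)::

    import sys

    def supports_ansi(stream=None):
        """Guess whether ANSI escape sequences will render on `stream`."""
        stream = stream if stream is not None else sys.stdout
        # Escape sequences only make sense on an interactive terminal.
        if not getattr(stream, 'isatty', lambda: False)():
            return False
        try:
            # On older Windows consoles colorama translates ANSI codes to
            # console API calls; elsewhere (or when colorama isn't
            # installed) the import simply fails and we carry on.
            import colorama
            colorama.init()
        except ImportError:
            pass
        return True

Because the import is attempted lazily and failures are swallowed, installing colorama_ is enough to activate it without any configuration.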
|
||||
|
||||
Contact
|
||||
-------
|
||||
|
||||
The latest version of `humanfriendly` is available on PyPI_ and GitHub_. The
|
||||
documentation is hosted on `Read the Docs`_ and includes a changelog_. For bug
|
||||
reports please create an issue on GitHub_. If you have questions, suggestions,
|
||||
etc. feel free to send me an e-mail at `peter@peterodding.com`_.
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
This software is licensed under the `MIT license`_.
|
||||
|
||||
© 2021 Peter Odding.
|
||||
|
||||
.. External references:
|
||||
.. _#4: https://github.com/xolox/python-humanfriendly/issues/4
|
||||
.. _#8: https://github.com/xolox/python-humanfriendly/pull/8
|
||||
.. _#9: https://github.com/xolox/python-humanfriendly/pull/9
|
||||
.. _ANSI escape sequences: https://en.wikipedia.org/wiki/ANSI_escape_code
|
||||
.. _changelog: https://humanfriendly.readthedocs.io/en/latest/changelog.html
|
||||
.. _colorama: https://pypi.org/project/colorama
|
||||
.. _format_size(): https://humanfriendly.readthedocs.io/en/latest/#humanfriendly.format_size
|
||||
.. _GitHub: https://github.com/xolox/python-humanfriendly
|
||||
.. _MIT license: https://en.wikipedia.org/wiki/MIT_License
|
||||
.. _parse_size(): https://humanfriendly.readthedocs.io/en/latest/#humanfriendly.parse_size
|
||||
.. _peter@peterodding.com: peter@peterodding.com
|
||||
.. _PyPI: https://pypi.org/project/humanfriendly
|
||||
.. _Read the Docs: https://humanfriendly.readthedocs.io
|
||||
|
||||
|
||||
@@ -0,0 +1,40 @@
|
||||
../../../bin/humanfriendly,sha256=eFv1ubX0URLut03eF0LGbDYGsyW-I9Mq2YiJI2ofm2M,261
|
||||
humanfriendly-10.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
|
||||
humanfriendly-10.0.dist-info/LICENSE.txt,sha256=SsSPQReAnyc0BmFQRQ8SCzuxEKwdOzIXB5XgVg27wfU,1056
|
||||
humanfriendly-10.0.dist-info/METADATA,sha256=aLs0k4jN_spgKsw0Vbg6ey_jy-hAJeJ0k7y-dvOrbII,9201
|
||||
humanfriendly-10.0.dist-info/RECORD,,
|
||||
humanfriendly-10.0.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
|
||||
humanfriendly-10.0.dist-info/entry_points.txt,sha256=hU-ADsGls3mgf3plBt9B-oYLCtBDQiUJZ00khuAoqXc,58
|
||||
humanfriendly-10.0.dist-info/top_level.txt,sha256=7eKAKhckmlD4ZoWJkWmhnTs1pnP_bzF-56VTq0D7WIo,14
|
||||
humanfriendly/__init__.py,sha256=sPCMQv16m3p8xYwd3N3kUs1rewVJB3ZGDBkF0_8v6vo,31725
|
||||
humanfriendly/__pycache__/__init__.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/case.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/cli.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/compat.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/decorators.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/deprecation.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/prompts.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/sphinx.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/tables.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/testing.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/tests.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/text.cpython-312.pyc,,
|
||||
humanfriendly/__pycache__/usage.cpython-312.pyc,,
|
||||
humanfriendly/case.py,sha256=fkIinj4V1S8Xl3OCSYSgqKoGzCCbVJOjI5ZznViCT1Q,6008
|
||||
humanfriendly/cli.py,sha256=ZpGqTHLwfLjFACsQacYkN9DiBZkGecPzIHysliWN1i8,9822
|
||||
humanfriendly/compat.py,sha256=7qoFGMNFizGhs5BIyiMKTDUq29DxzuXGX4MRoZLo4ms,3984
|
||||
humanfriendly/decorators.py,sha256=ivxB-U9dfXUjCl4GZ8g_gKFC4a3uyxDVi-3rlxPjvJo,1501
|
||||
humanfriendly/deprecation.py,sha256=bdx1_T8L1gF635E41E6bYTqie9P3o7rY6n4wohjkLLk,9499
|
||||
humanfriendly/prompts.py,sha256=8PSJ1Hpr3ld6YkCaXQsTHAep9BtTKYlKmQi1RiaHIEc,16335
|
||||
humanfriendly/sphinx.py,sha256=BrFxK-rX3LN4iMqgNFuOTweYfppYp--O3HqWBlu05M0,11452
|
||||
humanfriendly/tables.py,sha256=lCDnEKyyZRmrIHrDlUzzVpg9z6lKOO8ldP4fh1gzMBI,13968
|
||||
humanfriendly/terminal/__init__.py,sha256=5BzxVHKriznclRjrWwYEk2l1ct0q0WdupJIrkuI9glc,30759
|
||||
humanfriendly/terminal/__pycache__/__init__.cpython-312.pyc,,
|
||||
humanfriendly/terminal/__pycache__/html.cpython-312.pyc,,
|
||||
humanfriendly/terminal/__pycache__/spinners.cpython-312.pyc,,
|
||||
humanfriendly/terminal/html.py,sha256=_csUZ4hID0ATwTvPXU8KrAe6ZzD_gUS7_tN3FmCHA4Q,16747
|
||||
humanfriendly/terminal/spinners.py,sha256=o7nkn8rBdTqr-XHIT972sIgIuxiDymQ5kPVkBzR7utE,11323
|
||||
humanfriendly/testing.py,sha256=hErsmBN5Crej5k1P_I0dhQK8IAwFr1IOZ9kQPIrZUys,24359
|
||||
humanfriendly/tests.py,sha256=3bkasrRgw0cAdZFHgmTo4DUHgXEnlHtRoq_t2MEafbc,68919
|
||||
humanfriendly/text.py,sha256=_WBG4SZ4bT5SH94zLUdfmB6NXbvc4gBYECEmQYXJ1sE,16212
|
||||
humanfriendly/usage.py,sha256=AhPo6DcvaBRIFGWNctU22McMzN3Ryc6ZCC83kt2Zbk4,13768
|
||||
@@ -0,0 +1,6 @@
|
||||
Wheel-Version: 1.0
|
||||
Generator: bdist_wheel (0.34.2)
|
||||
Root-Is-Purelib: true
|
||||
Tag: py2-none-any
|
||||
Tag: py3-none-any
|
||||
|
||||
@@ -0,0 +1,3 @@
|
||||
[console_scripts]
|
||||
humanfriendly = humanfriendly.cli:main
|
||||
|
||||
@@ -0,0 +1 @@
|
||||
humanfriendly
|
||||
@@ -0,0 +1,838 @@
|
||||
# Human friendly input/output in Python.
|
||||
#
|
||||
# Author: Peter Odding <peter@peterodding.com>
|
||||
# Last Change: September 17, 2021
|
||||
# URL: https://humanfriendly.readthedocs.io
|
||||
|
||||
"""The main module of the `humanfriendly` package."""
|
||||
|
||||
# Standard library modules.
|
||||
import collections
|
||||
import datetime
|
||||
import decimal
|
||||
import numbers
|
||||
import os
|
||||
import os.path
|
||||
import re
|
||||
import time
|
||||
|
||||
# Modules included in our package.
|
||||
from humanfriendly.compat import is_string, monotonic
|
||||
from humanfriendly.deprecation import define_aliases
|
||||
from humanfriendly.text import concatenate, format, pluralize, tokenize
|
||||
|
||||
# Public identifiers that require documentation.
|
||||
__all__ = (
|
||||
'CombinedUnit',
|
||||
'InvalidDate',
|
||||
'InvalidLength',
|
||||
'InvalidSize',
|
||||
'InvalidTimespan',
|
||||
'SizeUnit',
|
||||
'Timer',
|
||||
'__version__',
|
||||
'coerce_boolean',
|
||||
'coerce_pattern',
|
||||
'coerce_seconds',
|
||||
'disk_size_units',
|
||||
'format_length',
|
||||
'format_number',
|
||||
'format_path',
|
||||
'format_size',
|
||||
'format_timespan',
|
||||
'length_size_units',
|
||||
'parse_date',
|
||||
'parse_length',
|
||||
'parse_path',
|
||||
'parse_size',
|
||||
'parse_timespan',
|
||||
'round_number',
|
||||
'time_units',
|
||||
)
|
||||
|
||||
# Semi-standard module versioning.
|
||||
__version__ = '10.0'
|
||||
|
||||
# Named tuples to define units of size.
|
||||
SizeUnit = collections.namedtuple('SizeUnit', 'divider, symbol, name')
|
||||
CombinedUnit = collections.namedtuple('CombinedUnit', 'decimal, binary')
|
||||
|
||||
# Common disk size units in binary (base-2) and decimal (base-10) multiples.
|
||||
disk_size_units = (
|
||||
CombinedUnit(SizeUnit(1000**1, 'KB', 'kilobyte'), SizeUnit(1024**1, 'KiB', 'kibibyte')),
|
||||
CombinedUnit(SizeUnit(1000**2, 'MB', 'megabyte'), SizeUnit(1024**2, 'MiB', 'mebibyte')),
|
||||
CombinedUnit(SizeUnit(1000**3, 'GB', 'gigabyte'), SizeUnit(1024**3, 'GiB', 'gibibyte')),
|
||||
CombinedUnit(SizeUnit(1000**4, 'TB', 'terabyte'), SizeUnit(1024**4, 'TiB', 'tebibyte')),
|
||||
CombinedUnit(SizeUnit(1000**5, 'PB', 'petabyte'), SizeUnit(1024**5, 'PiB', 'pebibyte')),
|
||||
CombinedUnit(SizeUnit(1000**6, 'EB', 'exabyte'), SizeUnit(1024**6, 'EiB', 'exbibyte')),
|
||||
CombinedUnit(SizeUnit(1000**7, 'ZB', 'zettabyte'), SizeUnit(1024**7, 'ZiB', 'zebibyte')),
|
||||
CombinedUnit(SizeUnit(1000**8, 'YB', 'yottabyte'), SizeUnit(1024**8, 'YiB', 'yobibyte')),
|
||||
)
|
||||
|
||||
# Common length size units, used for formatting and parsing.
|
||||
length_size_units = (dict(prefix='nm', divider=1e-09, singular='nm', plural='nm'),
|
||||
dict(prefix='mm', divider=1e-03, singular='mm', plural='mm'),
|
||||
dict(prefix='cm', divider=1e-02, singular='cm', plural='cm'),
|
||||
dict(prefix='m', divider=1, singular='metre', plural='metres'),
|
||||
dict(prefix='km', divider=1000, singular='km', plural='km'))
|
||||
|
||||
# Common time units, used for formatting of time spans.
|
||||
time_units = (dict(divider=1e-9, singular='nanosecond', plural='nanoseconds', abbreviations=['ns']),
|
||||
dict(divider=1e-6, singular='microsecond', plural='microseconds', abbreviations=['us']),
|
||||
dict(divider=1e-3, singular='millisecond', plural='milliseconds', abbreviations=['ms']),
|
||||
dict(divider=1, singular='second', plural='seconds', abbreviations=['s', 'sec', 'secs']),
|
||||
dict(divider=60, singular='minute', plural='minutes', abbreviations=['m', 'min', 'mins']),
|
||||
dict(divider=60 * 60, singular='hour', plural='hours', abbreviations=['h']),
|
||||
dict(divider=60 * 60 * 24, singular='day', plural='days', abbreviations=['d']),
|
||||
dict(divider=60 * 60 * 24 * 7, singular='week', plural='weeks', abbreviations=['w']),
|
||||
dict(divider=60 * 60 * 24 * 7 * 52, singular='year', plural='years', abbreviations=['y']))
|
||||
|
||||
|
||||
def coerce_boolean(value):
|
||||
"""
|
||||
Coerce any value to a boolean.
|
||||
|
||||
:param value: Any Python value. If the value is a string:
|
||||
|
||||
- The strings '1', 'yes', 'true' and 'on' are coerced to :data:`True`.
|
||||
- The strings '0', 'no', 'false' and 'off' are coerced to :data:`False`.
|
||||
- Other strings raise an exception.
|
||||
|
||||
Other Python values are coerced using :class:`bool`.
|
||||
:returns: A proper boolean value.
|
||||
:raises: :exc:`exceptions.ValueError` when the value is a string but
|
||||
cannot be coerced with certainty.
|
||||
"""
|
||||
if is_string(value):
|
||||
normalized = value.strip().lower()
|
||||
if normalized in ('1', 'yes', 'true', 'on'):
|
||||
return True
|
||||
elif normalized in ('0', 'no', 'false', 'off', ''):
|
||||
return False
|
||||
else:
|
||||
msg = "Failed to coerce string to boolean! (%r)"
|
||||
raise ValueError(format(msg, value))
|
||||
else:
|
||||
return bool(value)
|
||||
|
||||
|
||||
def coerce_pattern(value, flags=0):
|
||||
"""
|
||||
Coerce strings to compiled regular expressions.
|
||||
|
||||
:param value: A string containing a regular expression pattern
|
||||
or a compiled regular expression.
|
||||
:param flags: The flags used to compile the pattern (an integer).
|
||||
:returns: A compiled regular expression.
|
||||
:raises: :exc:`~exceptions.ValueError` when `value` isn't a string
|
||||
and also isn't a compiled regular expression.
|
||||
"""
|
||||
if is_string(value):
|
||||
value = re.compile(value, flags)
|
||||
else:
|
||||
empty_pattern = re.compile('')
|
||||
pattern_type = type(empty_pattern)
|
||||
if not isinstance(value, pattern_type):
|
||||
msg = "Failed to coerce value to compiled regular expression! (%r)"
|
||||
raise ValueError(format(msg, value))
|
||||
return value
|
||||
|
||||
|
||||
def coerce_seconds(value):
|
||||
"""
|
||||
Coerce a value to the number of seconds.
|
||||
|
||||
:param value: An :class:`int`, :class:`float` or
|
||||
:class:`datetime.timedelta` object.
|
||||
:returns: An :class:`int` or :class:`float` value.
|
||||
|
||||
When `value` is a :class:`datetime.timedelta` object the
|
||||
:meth:`~datetime.timedelta.total_seconds()` method is called.
|
||||
"""
|
||||
if isinstance(value, datetime.timedelta):
|
||||
return value.total_seconds()
|
||||
if not isinstance(value, numbers.Number):
|
||||
msg = "Failed to coerce value to number of seconds! (%r)"
|
||||
raise ValueError(format(msg, value))
|
||||
return value
|
||||
|
||||
|
||||
def format_size(num_bytes, keep_width=False, binary=False):
|
||||
"""
|
||||
Format a byte count as a human readable file size.
|
||||
|
||||
:param num_bytes: The size to format in bytes (an integer).
|
||||
:param keep_width: :data:`True` if trailing zeros should not be stripped,
|
||||
:data:`False` if they can be stripped.
|
||||
:param binary: :data:`True` to use binary multiples of bytes (base-2),
|
||||
:data:`False` to use decimal multiples of bytes (base-10).
|
||||
:returns: The corresponding human readable file size (a string).
|
||||
|
||||
This function knows how to format sizes in bytes, kilobytes, megabytes,
|
||||
gigabytes, terabytes and petabytes. Some examples:
|
||||
|
||||
>>> from humanfriendly import format_size
|
||||
>>> format_size(0)
|
||||
'0 bytes'
|
||||
>>> format_size(1)
|
||||
'1 byte'
|
||||
>>> format_size(5)
|
||||
'5 bytes'
|
||||
>>> format_size(1000)
|
||||
'1 KB'
|
||||
>>> format_size(1024, binary=True)
|
||||
'1 KiB'
|
||||
>>> format_size(1000 ** 3 * 4)
|
||||
'4 GB'
|
||||
"""
|
||||
for unit in reversed(disk_size_units):
|
||||
if num_bytes >= unit.binary.divider and binary:
|
||||
number = round_number(float(num_bytes) / unit.binary.divider, keep_width=keep_width)
|
||||
return pluralize(number, unit.binary.symbol, unit.binary.symbol)
|
||||
elif num_bytes >= unit.decimal.divider and not binary:
|
||||
number = round_number(float(num_bytes) / unit.decimal.divider, keep_width=keep_width)
|
||||
return pluralize(number, unit.decimal.symbol, unit.decimal.symbol)
|
||||
return pluralize(num_bytes, 'byte')
|
||||
|
||||
|
||||
def parse_size(size, binary=False):
|
||||
"""
|
||||
Parse a human readable data size and return the number of bytes.
|
||||
|
||||
:param size: The human readable file size to parse (a string).
|
||||
:param binary: :data:`True` to use binary multiples of bytes (base-2) for
|
||||
ambiguous unit symbols and names, :data:`False` to use
|
||||
decimal multiples of bytes (base-10).
|
||||
:returns: The corresponding size in bytes (an integer).
|
||||
:raises: :exc:`InvalidSize` when the input can't be parsed.
|
||||
|
||||
This function knows how to parse sizes in bytes, kilobytes, megabytes,
|
||||
gigabytes, terabytes and petabytes. Some examples:
|
||||
|
||||
>>> from humanfriendly import parse_size
|
||||
>>> parse_size('42')
|
||||
42
|
||||
>>> parse_size('13b')
|
||||
13
|
||||
>>> parse_size('5 bytes')
|
||||
5
|
||||
>>> parse_size('1 KB')
|
||||
1000
|
||||
>>> parse_size('1 kilobyte')
|
||||
1000
|
||||
>>> parse_size('1 KiB')
|
||||
1024
|
||||
>>> parse_size('1 KB', binary=True)
|
||||
1024
|
||||
>>> parse_size('1.5 GB')
|
||||
1500000000
|
||||
>>> parse_size('1.5 GB', binary=True)
|
||||
1610612736
|
||||
"""
|
||||
tokens = tokenize(size)
|
||||
if tokens and isinstance(tokens[0], numbers.Number):
|
||||
# Get the normalized unit (if any) from the tokenized input.
|
||||
normalized_unit = tokens[1].lower() if len(tokens) == 2 and is_string(tokens[1]) else ''
|
||||
# If the input contains only a number, it's assumed to be the number of
|
||||
# bytes. The second token can also explicitly reference the unit bytes.
|
||||
if len(tokens) == 1 or normalized_unit.startswith('b'):
|
||||
return int(tokens[0])
|
||||
# Otherwise we expect two tokens: A number and a unit.
|
||||
if normalized_unit:
|
||||
# Convert plural units to singular units, for details:
|
||||
# https://github.com/xolox/python-humanfriendly/issues/26
|
||||
normalized_unit = normalized_unit.rstrip('s')
|
||||
for unit in disk_size_units:
|
||||
# First we check for unambiguous symbols (KiB, MiB, GiB, etc)
|
||||
# and names (kibibyte, mebibyte, gibibyte, etc) because their
|
||||
# handling is always the same.
|
||||
if normalized_unit in (unit.binary.symbol.lower(), unit.binary.name.lower()):
|
||||
return int(tokens[0] * unit.binary.divider)
|
||||
# Now we will deal with ambiguous prefixes (K, M, G, etc),
|
||||
# symbols (KB, MB, GB, etc) and names (kilobyte, megabyte,
|
||||
# gigabyte, etc) according to the caller's preference.
|
||||
if (normalized_unit in (unit.decimal.symbol.lower(), unit.decimal.name.lower()) or
|
||||
normalized_unit.startswith(unit.decimal.symbol[0].lower())):
|
||||
return int(tokens[0] * (unit.binary.divider if binary else unit.decimal.divider))
|
||||
# We failed to parse the size specification.
|
||||
msg = "Failed to parse size! (input %r was tokenized as %r)"
|
||||
raise InvalidSize(format(msg, size, tokens))
|
||||
|
||||
|
||||
def format_length(num_metres, keep_width=False):
|
||||
"""
|
||||
Format a metre count as a human readable length.
|
||||
|
||||
:param num_metres: The length to format in metres (float / integer).
|
||||
:param keep_width: :data:`True` if trailing zeros should not be stripped,
|
||||
:data:`False` if they can be stripped.
|
||||
:returns: The corresponding human readable length (a string).
|
||||
|
||||
This function supports ranges from nanometres to kilometres.
|
||||
|
||||
Some examples:
|
||||
|
||||
>>> from humanfriendly import format_length
|
||||
>>> format_length(0)
|
||||
'0 metres'
|
||||
>>> format_length(1)
|
||||
'1 metre'
|
||||
>>> format_length(5)
|
||||
'5 metres'
|
||||
>>> format_length(1000)
|
||||
'1 km'
|
||||
>>> format_length(0.004)
|
||||
'4 mm'
|
||||
"""
|
||||
for unit in reversed(length_size_units):
|
||||
if num_metres >= unit['divider']:
|
||||
number = round_number(float(num_metres) / unit['divider'], keep_width=keep_width)
|
||||
return pluralize(number, unit['singular'], unit['plural'])
|
||||
return pluralize(num_metres, 'metre')
|
||||
|
||||
|
||||
def parse_length(length):
|
||||
"""
|
||||
Parse a human readable length and return the number of metres.
|
||||
|
||||
:param length: The human readable length to parse (a string).
|
||||
:returns: The corresponding length in metres (a float).
|
||||
:raises: :exc:`InvalidLength` when the input can't be parsed.
|
||||
|
||||
Some examples:
|
||||
|
||||
>>> from humanfriendly import parse_length
|
||||
>>> parse_length('42')
|
||||
42
|
||||
>>> parse_length('1 km')
|
||||
1000
|
||||
>>> parse_length('5mm')
|
||||
0.005
|
||||
>>> parse_length('15.3cm')
|
||||
0.153
|
||||
"""
|
||||
tokens = tokenize(length)
|
||||
if tokens and isinstance(tokens[0], numbers.Number):
|
||||
# If the input contains only a number, it's assumed to be the number of metres.
|
||||
if len(tokens) == 1:
|
||||
return tokens[0]
|
||||
# Otherwise we expect to find two tokens: A number and a unit.
|
||||
if len(tokens) == 2 and is_string(tokens[1]):
|
||||
normalized_unit = tokens[1].lower()
|
||||
# Try to match the first letter of the unit.
|
||||
for unit in length_size_units:
|
||||
if normalized_unit.startswith(unit['prefix']):
|
||||
return tokens[0] * unit['divider']
|
||||
# We failed to parse the length specification.
|
||||
msg = "Failed to parse length! (input %r was tokenized as %r)"
|
||||
raise InvalidLength(format(msg, length, tokens))
|
||||
|
||||
|
||||
def format_number(number, num_decimals=2):
|
||||
"""
|
||||
Format a number as a string including thousands separators.
|
||||
|
||||
:param number: The number to format (a number like an :class:`int`,
|
||||
:class:`long` or :class:`float`).
|
||||
:param num_decimals: The number of decimals to render (2 by default). If no
|
||||
decimal places are required to represent the number
|
||||
they will be omitted regardless of this argument.
|
||||
:returns: The formatted number (a string).
|
||||
|
||||
This function is intended to make it easier to recognize the order of size
|
||||
of the number being formatted.
|
||||
|
||||
Here's an example:
|
||||
|
||||
>>> from humanfriendly import format_number
|
||||
>>> print(format_number(6000000))
|
||||
6,000,000
|
||||
>>> print(format_number(6000000000.42))
|
||||
6,000,000,000.42
|
||||
>>> print(format_number(6000000000.42, num_decimals=0))
|
||||
6,000,000,000
|
||||
"""
|
||||
integer_part, _, decimal_part = str(float(number)).partition('.')
|
||||
negative_sign = integer_part.startswith('-')
|
||||
reversed_digits = ''.join(reversed(integer_part.lstrip('-')))
|
||||
parts = []
|
||||
while reversed_digits:
|
||||
parts.append(reversed_digits[:3])
|
||||
reversed_digits = reversed_digits[3:]
|
||||
formatted_number = ''.join(reversed(','.join(parts)))
|
||||
decimals_to_add = decimal_part[:num_decimals].rstrip('0')
|
||||
if decimals_to_add:
|
||||
formatted_number += '.' + decimals_to_add
|
||||
if negative_sign:
|
||||
formatted_number = '-' + formatted_number
|
||||
return formatted_number
|
||||
|
||||
|
||||
def round_number(count, keep_width=False):
|
||||
"""
|
||||
Round a floating point number to two decimal places in a human friendly format.
|
||||
|
||||
:param count: The number to format.
|
||||
:param keep_width: :data:`True` if trailing zeros should not be stripped,
|
||||
:data:`False` if they can be stripped.
|
||||
:returns: The formatted number as a string. If no decimal places are
|
||||
required to represent the number, they will be omitted.
|
||||
|
||||
The main purpose of this function is to be used by functions like
|
||||
:func:`format_length()`, :func:`format_size()` and
|
||||
:func:`format_timespan()`.
|
||||
|
||||
Here are some examples:
|
||||
|
||||
>>> from humanfriendly import round_number
|
||||
>>> round_number(1)
|
||||
'1'
|
||||
>>> import math
    >>> round_number(math.pi)
|
||||
'3.14'
|
||||
>>> round_number(5.001)
|
||||
'5'
|
||||
"""
|
||||
text = '%.2f' % float(count)
|
||||
if not keep_width:
|
||||
text = re.sub('0+$', '', text)
|
||||
text = re.sub(r'\.$', '', text)
|
||||
return text
|
||||
|
||||
|
||||
def format_timespan(num_seconds, detailed=False, max_units=3):
|
||||
"""
|
||||
Format a timespan in seconds as a human readable string.
|
||||
|
||||
:param num_seconds: Any value accepted by :func:`coerce_seconds()`.
|
||||
:param detailed: If :data:`True` milliseconds are represented separately
|
||||
instead of being represented as fractional seconds
|
||||
(defaults to :data:`False`).
|
||||
:param max_units: The maximum number of units to show in the formatted time
|
||||
span (an integer, defaults to three).
|
||||
:returns: The formatted timespan as a string.
|
||||
:raise: See :func:`coerce_seconds()`.
|
||||
|
||||
Some examples:
|
||||
|
||||
>>> from humanfriendly import format_timespan
|
||||
>>> format_timespan(0)
|
||||
'0 seconds'
|
||||
>>> format_timespan(1)
|
||||
'1 second'
|
||||
>>> import math
|
||||
>>> format_timespan(math.pi)
|
||||
'3.14 seconds'
|
||||
>>> hour = 60 * 60
|
||||
>>> day = hour * 24
|
||||
>>> week = day * 7
|
||||
>>> format_timespan(week * 52 + day * 2 + hour * 3)
|
||||
'1 year, 2 days and 3 hours'
|
||||
"""
|
||||
num_seconds = coerce_seconds(num_seconds)
|
||||
if num_seconds < 60 and not detailed:
|
||||
# Fast path.
|
||||
return pluralize(round_number(num_seconds), 'second')
|
||||
else:
|
||||
# Slow path.
|
||||
result = []
|
||||
num_seconds = decimal.Decimal(str(num_seconds))
|
||||
relevant_units = list(reversed(time_units[0 if detailed else 3:]))
|
||||
for unit in relevant_units:
|
||||
# Extract the unit count from the remaining time.
|
||||
divider = decimal.Decimal(str(unit['divider']))
|
||||
count = num_seconds / divider
|
||||
num_seconds %= divider
|
||||
# Round the unit count appropriately.
|
||||
if unit != relevant_units[-1]:
|
||||
# Integer rounding for all but the smallest unit.
|
||||
count = int(count)
|
||||
else:
|
||||
# Floating point rounding for the smallest unit.
|
||||
count = round_number(count)
|
||||
# Only include relevant units in the result.
|
||||
if count not in (0, '0'):
|
||||
result.append(pluralize(count, unit['singular'], unit['plural']))
|
||||
if len(result) == 1:
|
||||
# A single count/unit combination.
|
||||
return result[0]
|
||||
else:
|
||||
if not detailed:
|
||||
# Remove `insignificant' data from the formatted timespan.
|
||||
result = result[:max_units]
|
||||
# Format the timespan in a readable way.
|
||||
return concatenate(result)
|
||||
|
||||
|
||||
def parse_timespan(timespan):
|
||||
"""
|
||||
Parse a "human friendly" timespan into the number of seconds.
|
||||
|
||||
:param timespan: A string like ``5h`` (5 hours), ``10m`` (10 minutes) or
|
||||
``42s`` (42 seconds).
|
||||
:returns: The number of seconds as a floating point number.
|
||||
:raises: :exc:`InvalidTimespan` when the input can't be parsed.
|
||||
|
||||
Note that the :func:`parse_timespan()` function is not meant to be the
|
||||
"mirror image" of the :func:`format_timespan()` function. Instead it's
|
||||
meant to allow humans to easily and succinctly specify a timespan with a
|
||||
minimal amount of typing. It's very useful to accept easy to write time
|
||||
spans as e.g. command line arguments to programs.
|
||||
|
||||
The time units (and abbreviations) supported by this function are:
|
||||
|
||||
- ms, millisecond, milliseconds
|
||||
- s, sec, secs, second, seconds
|
||||
- m, min, mins, minute, minutes
|
||||
- h, hour, hours
|
||||
- d, day, days
|
||||
- w, week, weeks
|
||||
- y, year, years
|
||||
|
||||
Some examples:
|
||||
|
||||
>>> from humanfriendly import parse_timespan
|
||||
>>> parse_timespan('42')
|
||||
42.0
|
||||
>>> parse_timespan('42s')
|
||||
42.0
|
||||
>>> parse_timespan('1m')
|
||||
60.0
|
||||
>>> parse_timespan('1h')
|
||||
3600.0
|
||||
>>> parse_timespan('1d')
|
||||
86400.0
|
||||
"""
|
||||
tokens = tokenize(timespan)
|
||||
if tokens and isinstance(tokens[0], numbers.Number):
|
||||
# If the input contains only a number, it's assumed to be the number of seconds.
|
||||
if len(tokens) == 1:
|
||||
return float(tokens[0])
|
||||
# Otherwise we expect to find two tokens: A number and a unit.
|
||||
if len(tokens) == 2 and is_string(tokens[1]):
|
||||
normalized_unit = tokens[1].lower()
|
||||
for unit in time_units:
|
||||
if (normalized_unit == unit['singular'] or
|
||||
normalized_unit == unit['plural'] or
|
||||
normalized_unit in unit['abbreviations']):
|
||||
return float(tokens[0]) * unit['divider']
|
||||
# We failed to parse the timespan specification.
|
||||
msg = "Failed to parse timespan! (input %r was tokenized as %r)"
|
||||
raise InvalidTimespan(format(msg, timespan, tokens))
|
||||
|
||||
|
||||
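The number-plus-unit scheme above can be sketched with nothing but the standard library. This is a hypothetical mini-parser for illustration only: `parse_timespan_sketch` and the reduced `_UNITS` table are stand-ins, not the library's `tokenize()`/`time_units` machinery.

```python
import re

# Hypothetical reduced unit table (multipliers in seconds); the real library
# uses a richer time_units structure with singular/plural/abbreviation forms.
_UNITS = {'ms': 0.001, 's': 1, 'm': 60, 'h': 3600, 'd': 86400, 'w': 604800}


def parse_timespan_sketch(text):
    # Split the input into a numeric count and an optional unit abbreviation.
    match = re.fullmatch(r'\s*([0-9.]+)\s*([a-z]*)\s*', text.lower())
    if not match:
        raise ValueError("Failed to parse timespan %r" % text)
    count, unit = match.groups()
    # A bare number is taken as a number of seconds, mirroring the code above.
    multiplier = _UNITS.get(unit or 's')
    if multiplier is None:
        raise ValueError("Unknown time unit %r" % unit)
    return float(count) * multiplier
```

The same shape generalizes to the full unit table: match the normalized unit string against each unit's names and multiply by its divider.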
def parse_date(datestring):
    """
    Parse a date/time string into a tuple of integers.

    :param datestring: The date/time string to parse.
    :returns: A tuple with the numbers ``(year, month, day, hour, minute,
              second)`` (all numbers are integers).
    :raises: :exc:`InvalidDate` when the date cannot be parsed.

    Supported date/time formats:

    - ``YYYY-MM-DD``
    - ``YYYY-MM-DD HH:MM:SS``

    .. note:: If you want to parse date/time strings with a fixed, known
              format and :func:`parse_date()` isn't useful to you, consider
              :func:`time.strptime()` or :meth:`datetime.datetime.strptime()`,
              both of which are included in the Python standard library.
              Alternatively for more complex tasks consider using the
              date/time parsing module in the dateutil_ package.

    Examples:

    >>> from humanfriendly import parse_date
    >>> parse_date('2013-06-17')
    (2013, 6, 17, 0, 0, 0)
    >>> parse_date('2013-06-17 02:47:42')
    (2013, 6, 17, 2, 47, 42)

    Here's how you convert the result to a number (`Unix time`_):

    >>> from humanfriendly import parse_date
    >>> from time import mktime
    >>> mktime(parse_date('2013-06-17 02:47:42') + (-1, -1, -1))
    1371430062.0

    And here's how you convert it to a :class:`datetime.datetime` object:

    >>> from humanfriendly import parse_date
    >>> from datetime import datetime
    >>> datetime(*parse_date('2013-06-17 02:47:42'))
    datetime.datetime(2013, 6, 17, 2, 47, 42)

    Here's an example that combines :func:`format_timespan()` and
    :func:`parse_date()` to calculate a human friendly timespan since a
    given date:

    >>> from humanfriendly import format_timespan, parse_date
    >>> from time import mktime, time
    >>> unix_time = mktime(parse_date('2013-06-17 02:47:42') + (-1, -1, -1))
    >>> seconds_since_then = time() - unix_time
    >>> print(format_timespan(seconds_since_then))
    1 year, 43 weeks and 1 day

    .. _dateutil: https://dateutil.readthedocs.io/en/latest/parser.html
    .. _Unix time: http://en.wikipedia.org/wiki/Unix_time
    """
    try:
        tokens = [t.strip() for t in datestring.split()]
        if len(tokens) >= 2:
            date_parts = list(map(int, tokens[0].split('-'))) + [1, 1]
            time_parts = list(map(int, tokens[1].split(':'))) + [0, 0, 0]
            return tuple(date_parts[0:3] + time_parts[0:3])
        else:
            year, month, day = (list(map(int, datestring.split('-'))) + [1, 1])[0:3]
            return (year, month, day, 0, 0, 0)
    except Exception:
        msg = "Invalid date! (expected 'YYYY-MM-DD' or 'YYYY-MM-DD HH:MM:SS' but got: %r)"
        raise InvalidDate(format(msg, datestring))


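The split-and-pad approach above is easy to sketch on its own. `parse_date_sketch` is a hypothetical simplification (no error handling, no whitespace trimming) showing how the padding defaults produce a six-tuple that unpacks straight into `datetime()`:

```python
from datetime import datetime


def parse_date_sketch(datestring):
    # Split into a date token and an optional time token.
    tokens = datestring.split()
    # Pad so 'YYYY' or 'YYYY-MM' would still yield three fields.
    date_parts = list(map(int, tokens[0].split('-'))) + [1, 1]
    if len(tokens) > 1:
        time_parts = list(map(int, tokens[1].split(':'))) + [0, 0, 0]
    else:
        # No time of day given: default to midnight.
        time_parts = [0, 0, 0]
    return tuple(date_parts[:3] + time_parts[:3])


# The resulting tuple unpacks directly into a datetime object.
moment = datetime(*parse_date_sketch('2013-06-17 02:47:42'))
```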
def format_path(pathname):
    """
    Shorten a pathname to make it more human friendly.

    :param pathname: An absolute pathname (a string).
    :returns: The pathname with the user's home directory abbreviated.

    Given an absolute pathname, this function abbreviates the user's home
    directory to ``~/`` in order to shorten the pathname without losing
    information. It is not an error if the pathname is not relative to the
    current user's home directory.

    Here's an example of its usage:

    >>> from os import environ
    >>> from os.path import join
    >>> vimrc = join(environ['HOME'], '.vimrc')
    >>> vimrc
    '/home/peter/.vimrc'
    >>> from humanfriendly import format_path
    >>> format_path(vimrc)
    '~/.vimrc'
    """
    pathname = os.path.abspath(pathname)
    home = os.environ.get('HOME')
    if home:
        home = os.path.abspath(home)
        if pathname.startswith(home):
            pathname = os.path.join('~', os.path.relpath(pathname, home))
    return pathname


def parse_path(pathname):
    """
    Convert a human friendly pathname to an absolute pathname.

    Expands leading tildes using :func:`os.path.expanduser()` and
    environment variables using :func:`os.path.expandvars()` and makes the
    resulting pathname absolute using :func:`os.path.abspath()`.

    :param pathname: A human friendly pathname (a string).
    :returns: An absolute pathname (a string).
    """
    return os.path.abspath(os.path.expanduser(os.path.expandvars(pathname)))


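The expansion pipeline in `parse_path()` can be exercised with the standard library alone. The `EXAMPLE_DIR` variable and the paths below are hypothetical values set purely for this demonstration:

```python
import os

# Set a hypothetical environment variable so expandvars() has something to do.
os.environ['EXAMPLE_DIR'] = os.path.join(os.sep, 'tmp', 'example')

# Same chain as parse_path(): expand variables, expand tildes, make absolute
# (abspath also normalizes away the redundant '.' component).
expanded = os.path.abspath(os.path.expanduser(os.path.expandvars(
    os.path.join('$EXAMPLE_DIR', '.', 'logs'))))
```

The order matters: variables may expand to strings containing `~`, so `expandvars()` runs before `expanduser()`, and `abspath()` runs last so the result is always absolute and normalized.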
class Timer(object):

    """
    Easy to use timer to keep track of long running operations.
    """

    def __init__(self, start_time=None, resumable=False):
        """
        Remember the time when the :class:`Timer` was created.

        :param start_time: The start time (a float, defaults to the current time).
        :param resumable: Create a resumable timer (defaults to :data:`False`).

        When `start_time` is given :class:`Timer` uses :func:`time.time()` as a
        clock source, otherwise it uses :func:`humanfriendly.compat.monotonic()`.
        """
        if resumable:
            self.monotonic = True
            self.resumable = True
            self.start_time = 0.0
            self.total_time = 0.0
        elif start_time:
            self.monotonic = False
            self.resumable = False
            self.start_time = start_time
        else:
            self.monotonic = True
            self.resumable = False
            self.start_time = monotonic()

    def __enter__(self):
        """
        Start or resume counting elapsed time.

        :returns: The :class:`Timer` object.
        :raises: :exc:`~exceptions.ValueError` when the timer isn't resumable.
        """
        if not self.resumable:
            raise ValueError("Timer is not resumable!")
        self.start_time = monotonic()
        return self

    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        """
        Stop counting elapsed time.

        :raises: :exc:`~exceptions.ValueError` when the timer isn't resumable.
        """
        if not self.resumable:
            raise ValueError("Timer is not resumable!")
        if self.start_time:
            self.total_time += monotonic() - self.start_time
            self.start_time = 0.0

    def sleep(self, seconds):
        """
        Easy to use rate limiting of repeating actions.

        :param seconds: The number of seconds to sleep (an
                        integer or floating point number).

        This method sleeps for the given number of seconds minus the
        :attr:`elapsed_time`. If the resulting duration is negative
        :func:`time.sleep()` will still be called, but the argument
        given to it will be the number 0 (negative numbers cause
        :func:`time.sleep()` to raise an exception).

        The use case for this is to initialize a :class:`Timer` inside
        the body of a :keyword:`for` or :keyword:`while` loop and call
        :func:`Timer.sleep()` at the end of the loop body to rate limit
        whatever it is that is being done inside the loop body.

        For posterity: Although the implementation of :func:`sleep()` only
        requires a single line of code I've added it to :mod:`humanfriendly`
        anyway because now that I've thought about how to tackle this once I
        never want to have to think about it again :-P (unless I find ways to
        improve this).
        """
        time.sleep(max(0, seconds - self.elapsed_time))

    @property
    def elapsed_time(self):
        """Get the number of seconds counted so far."""
        elapsed_time = 0
        if self.resumable:
            elapsed_time += self.total_time
        if self.start_time:
            current_time = monotonic() if self.monotonic else time.time()
            elapsed_time += current_time - self.start_time
        return elapsed_time

    @property
    def rounded(self):
        """Human readable timespan rounded to seconds (a string)."""
        return format_timespan(round(self.elapsed_time))

    def __str__(self):
        """Show the elapsed time since the :class:`Timer` was created."""
        return format_timespan(self.elapsed_time)


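The rate limiting pattern that `Timer.sleep()` encapsulates can be sketched with `time.monotonic()` directly. The interval below is a hypothetical value chosen only to keep the demo fast:

```python
import time

interval = 0.01  # hypothetical target duration per iteration, in seconds
iterations = 0
for _ in range(3):
    start = time.monotonic()
    iterations += 1  # stand-in for the real work done in the loop body
    elapsed = time.monotonic() - start
    # Clamp at zero: time.sleep() raises on negative durations, so a loop
    # body that overran its budget simply doesn't sleep at all.
    time.sleep(max(0, interval - elapsed))
```

This is exactly the `seconds - elapsed_time` subtraction the method performs, with the same `max(0, ...)` guard against negative sleep durations.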
class InvalidDate(Exception):

    """
    Raised when a string cannot be parsed into a date.

    For example:

    >>> from humanfriendly import parse_date
    >>> parse_date('2013-06-XY')
    Traceback (most recent call last):
      File "humanfriendly.py", line 206, in parse_date
        raise InvalidDate(format(msg, datestring))
    humanfriendly.InvalidDate: Invalid date! (expected 'YYYY-MM-DD' or 'YYYY-MM-DD HH:MM:SS' but got: '2013-06-XY')
    """


class InvalidSize(Exception):

    """
    Raised when a string cannot be parsed into a file size.

    For example:

    >>> from humanfriendly import parse_size
    >>> parse_size('5 Z')
    Traceback (most recent call last):
      File "humanfriendly/__init__.py", line 267, in parse_size
        raise InvalidSize(format(msg, size, tokens))
    humanfriendly.InvalidSize: Failed to parse size! (input '5 Z' was tokenized as [5, 'Z'])
    """


class InvalidLength(Exception):

    """
    Raised when a string cannot be parsed into a length.

    For example:

    >>> from humanfriendly import parse_length
    >>> parse_length('5 Z')
    Traceback (most recent call last):
      File "humanfriendly/__init__.py", line 267, in parse_length
        raise InvalidLength(format(msg, length, tokens))
    humanfriendly.InvalidLength: Failed to parse length! (input '5 Z' was tokenized as [5, 'Z'])
    """


class InvalidTimespan(Exception):

    """
    Raised when a string cannot be parsed into a timespan.

    For example:

    >>> from humanfriendly import parse_timespan
    >>> parse_timespan('1 age')
    Traceback (most recent call last):
      File "humanfriendly/__init__.py", line 419, in parse_timespan
        raise InvalidTimespan(format(msg, timespan, tokens))
    humanfriendly.InvalidTimespan: Failed to parse timespan! (input '1 age' was tokenized as [1, 'age'])
    """


# Define aliases for backwards compatibility.
define_aliases(
    module_name=__name__,
    # In humanfriendly 1.23 the format_table() function was added to render a
    # table using characters like dashes and vertical bars to emulate borders.
    # Since then support for other tables has been added and the name of
    # format_table() has changed.
    format_table='humanfriendly.tables.format_pretty_table',
    # In humanfriendly 1.30 the following text manipulation functions were
    # moved out into a separate module to enable their usage in other modules
    # of the humanfriendly package (without causing circular imports).
    compact='humanfriendly.text.compact',
    concatenate='humanfriendly.text.concatenate',
    dedent='humanfriendly.text.dedent',
    format='humanfriendly.text.format',
    is_empty_line='humanfriendly.text.is_empty_line',
    pluralize='humanfriendly.text.pluralize',
    tokenize='humanfriendly.text.tokenize',
    trim_empty_lines='humanfriendly.text.trim_empty_lines',
    # In humanfriendly 1.38 the prompt_for_choice() function was moved out into a
    # separate module because several variants of interactive prompts were added.
    prompt_for_choice='humanfriendly.prompts.prompt_for_choice',
    # In humanfriendly 8.0 the Spinner class and minimum_spinner_interval
    # variable were extracted to a new module and the erase_line_code,
    # hide_cursor_code and show_cursor_code variables were moved.
    AutomaticSpinner='humanfriendly.terminal.spinners.AutomaticSpinner',
    Spinner='humanfriendly.terminal.spinners.Spinner',
    erase_line_code='humanfriendly.terminal.ANSI_ERASE_LINE',
    hide_cursor_code='humanfriendly.terminal.ANSI_SHOW_CURSOR',
    minimum_spinner_interval='humanfriendly.terminal.spinners.MINIMUM_INTERVAL',
    show_cursor_code='humanfriendly.terminal.ANSI_HIDE_CURSOR',
)

@@ -0,0 +1,157 @@
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: April 19, 2020
# URL: https://humanfriendly.readthedocs.io

"""
Simple case insensitive dictionaries.

The :class:`CaseInsensitiveDict` class is a dictionary whose string keys
are case insensitive. It works by automatically coercing string keys to
:class:`CaseInsensitiveKey` objects. Keys that are not strings are
supported as well, just without case insensitivity.

At its core this module works by normalizing strings to lowercase before
comparing or hashing them. It doesn't support proper case folding nor
does it support Unicode normalization, hence the word "simple".
"""

# Standard library modules.
import collections

try:
    # Python >= 3.3.
    from collections.abc import Iterable, Mapping
except ImportError:
    # Python 2.7.
    from collections import Iterable, Mapping

# Modules included in our package.
from humanfriendly.compat import basestring, unicode

# Public identifiers that require documentation.
__all__ = ("CaseInsensitiveDict", "CaseInsensitiveKey")


class CaseInsensitiveDict(collections.OrderedDict):

    """
    Simple case insensitive dictionary implementation (that remembers insertion order).

    This class works by overriding methods that deal with dictionary keys to
    coerce string keys to :class:`CaseInsensitiveKey` objects before calling
    down to the regular dictionary handling methods. While intended to be
    complete this class has not been extensively tested yet.
    """

    def __init__(self, other=None, **kw):
        """Initialize a :class:`CaseInsensitiveDict` object."""
        # Initialize our superclass.
        super(CaseInsensitiveDict, self).__init__()
        # Handle the initializer arguments.
        self.update(other, **kw)

    def coerce_key(self, key):
        """
        Coerce string keys to :class:`CaseInsensitiveKey` objects.

        :param key: The value to coerce (any type).
        :returns: If `key` is a string then a :class:`CaseInsensitiveKey`
                  object is returned, otherwise the value of `key` is
                  returned unmodified.
        """
        if isinstance(key, basestring):
            key = CaseInsensitiveKey(key)
        return key

    @classmethod
    def fromkeys(cls, iterable, value=None):
        """Create a case insensitive dictionary with keys from `iterable` and values set to `value`."""
        return cls((k, value) for k in iterable)

    def get(self, key, default=None):
        """Get the value of an existing item."""
        return super(CaseInsensitiveDict, self).get(self.coerce_key(key), default)

    def pop(self, key, default=None):
        """Remove an item from a case insensitive dictionary."""
        return super(CaseInsensitiveDict, self).pop(self.coerce_key(key), default)

    def setdefault(self, key, default=None):
        """Get the value of an existing item or add a new item."""
        return super(CaseInsensitiveDict, self).setdefault(self.coerce_key(key), default)

    def update(self, other=None, **kw):
        """Update a case insensitive dictionary with new items."""
        if isinstance(other, Mapping):
            # Copy the items from the given mapping.
            for key, value in other.items():
                self[key] = value
        elif isinstance(other, Iterable):
            # Copy the items from the given iterable.
            for key, value in other:
                self[key] = value
        elif other is not None:
            # Complain about unsupported values.
            msg = "'%s' object is not iterable"
            type_name = type(other).__name__
            raise TypeError(msg % type_name)
        # Copy the keyword arguments (if any).
        for key, value in kw.items():
            self[key] = value

    def __contains__(self, key):
        """Check if a case insensitive dictionary contains the given key."""
        return super(CaseInsensitiveDict, self).__contains__(self.coerce_key(key))

    def __delitem__(self, key):
        """Delete an item in a case insensitive dictionary."""
        return super(CaseInsensitiveDict, self).__delitem__(self.coerce_key(key))

    def __getitem__(self, key):
        """Get the value of an item in a case insensitive dictionary."""
        return super(CaseInsensitiveDict, self).__getitem__(self.coerce_key(key))

    def __setitem__(self, key, value):
        """Set the value of an item in a case insensitive dictionary."""
        return super(CaseInsensitiveDict, self).__setitem__(self.coerce_key(key), value)


class CaseInsensitiveKey(unicode):

    """
    Simple case insensitive dictionary key implementation.

    The :class:`CaseInsensitiveKey` class provides an intentionally simple
    implementation of case insensitive strings to be used as dictionary keys.

    If you need features like Unicode normalization or proper case folding
    please consider using a more advanced implementation like the :pypi:`istr`
    package instead.
    """

    def __new__(cls, value):
        """Create a :class:`CaseInsensitiveKey` object."""
        # Delegate string object creation to our superclass.
        obj = unicode.__new__(cls, value)
        # Store the lowercased string and its hash value.
        normalized = obj.lower()
        obj._normalized = normalized
        obj._hash_value = hash(normalized)
        return obj

    def __hash__(self):
        """Get the hash value of the lowercased string."""
        return self._hash_value

    def __eq__(self, other):
        """Compare two strings as lowercase."""
        if isinstance(other, CaseInsensitiveKey):
            # Fast path (and the most common case): Comparison with same type.
            return self._normalized == other._normalized
        elif isinstance(other, unicode):
            # Slow path: Comparison with strings that need lowercasing.
            return self._normalized == other.lower()
        else:
            return NotImplemented
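The technique behind `CaseInsensitiveKey` (normalize once, then hash and compare on the normalized form) can be sketched with a plain `str` subclass. `FoldedKey` is a hypothetical name for this illustration, not part of humanfriendly:

```python
class FoldedKey(str):
    """Minimal sketch: a string key that hashes and compares case insensitively."""

    def __new__(cls, value):
        obj = str.__new__(cls, value)
        # Normalize to lowercase once at construction time.
        obj._normalized = obj.lower()
        return obj

    def __hash__(self):
        # Hash the normalized form so 'Key' and 'key' land in the same bucket.
        return hash(self._normalized)

    def __eq__(self, other):
        if isinstance(other, str):
            return self._normalized == other.lower()
        return NotImplemented


# Lookups succeed regardless of the casing used at insertion time.
headers = {FoldedKey('Content-Type'): 'text/plain'}
```

Both `__hash__` and `__eq__` must agree on the normalized form; overriding only one of them would break dictionary lookups.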
@@ -0,0 +1,291 @@
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: March 1, 2020
# URL: https://humanfriendly.readthedocs.io

"""
Usage: humanfriendly [OPTIONS]

Human friendly input/output (text formatting) on the command
line based on the Python package with the same name.

Supported options:

  -c, --run-command

    Execute an external command (given as the positional arguments) and render
    a spinner and timer while the command is running. The exit status of the
    command is propagated.

  --format-table

    Read tabular data from standard input (each line is a row and each
    whitespace separated field is a column), format the data as a table and
    print the resulting table to standard output. See also the --delimiter
    option.

  -d, --delimiter=VALUE

    Change the delimiter used by --format-table to VALUE (a string). By default
    all whitespace is treated as a delimiter.

  -l, --format-length=LENGTH

    Convert a length count (given as the integer or float LENGTH) into a human
    readable string and print that string to standard output.

  -n, --format-number=VALUE

    Format a number (given as the integer or floating point number VALUE) with
    thousands separators and two decimal places (if needed) and print the
    formatted number to standard output.

  -s, --format-size=BYTES

    Convert a byte count (given as the integer BYTES) into a human readable
    string and print that string to standard output.

  -b, --binary

    Change the output of -s, --format-size to use binary multiples of bytes
    (base-2) instead of the default decimal multiples of bytes (base-10).

  -t, --format-timespan=SECONDS

    Convert a number of seconds (given as the floating point number SECONDS)
    into a human readable timespan and print that string to standard output.

  --parse-length=VALUE

    Parse a human readable length (given as the string VALUE) and print the
    number of metres to standard output.

  --parse-size=VALUE

    Parse a human readable data size (given as the string VALUE) and print the
    number of bytes to standard output.

  --demo

    Demonstrate changing the style and color of the terminal font using ANSI
    escape sequences.

  -h, --help

    Show this message and exit.
"""

# Standard library modules.
import functools
import getopt
import pipes
import subprocess
import sys

# Modules included in our package.
from humanfriendly import (
    Timer,
    format_length,
    format_number,
    format_size,
    format_timespan,
    parse_length,
    parse_size,
)
from humanfriendly.tables import format_pretty_table, format_smart_table
from humanfriendly.terminal import (
    ANSI_COLOR_CODES,
    ANSI_TEXT_STYLES,
    HIGHLIGHT_COLOR,
    ansi_strip,
    ansi_wrap,
    enable_ansi_support,
    find_terminal_size,
    output,
    usage,
    warning,
)
from humanfriendly.terminal.spinners import Spinner

# Public identifiers that require documentation.
__all__ = (
    'demonstrate_256_colors',
    'demonstrate_ansi_formatting',
    'main',
    'print_formatted_length',
    'print_formatted_number',
    'print_formatted_size',
    'print_formatted_table',
    'print_formatted_timespan',
    'print_parsed_length',
    'print_parsed_size',
    'run_command',
)


def main():
    """Command line interface for the ``humanfriendly`` program."""
    enable_ansi_support()
    try:
        options, arguments = getopt.getopt(sys.argv[1:], 'cd:l:n:s:bt:h', [
            'run-command', 'format-table', 'delimiter=', 'format-length=',
            'format-number=', 'format-size=', 'binary', 'format-timespan=',
            'parse-length=', 'parse-size=', 'demo', 'help',
        ])
    except Exception as e:
        warning("Error: %s", e)
        sys.exit(1)
    actions = []
    delimiter = None
    should_format_table = False
    binary = any(o in ('-b', '--binary') for o, v in options)
    for option, value in options:
        if option in ('-d', '--delimiter'):
            delimiter = value
        elif option == '--parse-size':
            actions.append(functools.partial(print_parsed_size, value))
        elif option == '--parse-length':
            actions.append(functools.partial(print_parsed_length, value))
        elif option in ('-c', '--run-command'):
            actions.append(functools.partial(run_command, arguments))
        elif option in ('-l', '--format-length'):
            actions.append(functools.partial(print_formatted_length, value))
        elif option in ('-n', '--format-number'):
            actions.append(functools.partial(print_formatted_number, value))
        elif option in ('-s', '--format-size'):
            actions.append(functools.partial(print_formatted_size, value, binary))
        elif option == '--format-table':
            should_format_table = True
        elif option in ('-t', '--format-timespan'):
            actions.append(functools.partial(print_formatted_timespan, value))
        elif option == '--demo':
            actions.append(demonstrate_ansi_formatting)
        elif option in ('-h', '--help'):
            usage(__doc__)
            return
    if should_format_table:
        actions.append(functools.partial(print_formatted_table, delimiter))
    if not actions:
        usage(__doc__)
        return
    for partial in actions:
        partial()


def run_command(command_line):
    """Run an external command and show a spinner while the command is running."""
    timer = Timer()
    spinner_label = "Waiting for command: %s" % " ".join(map(pipes.quote, command_line))
    with Spinner(label=spinner_label, timer=timer) as spinner:
        process = subprocess.Popen(command_line)
        while True:
            spinner.step()
            spinner.sleep()
            if process.poll() is not None:
                break
    sys.exit(process.returncode)


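The core of `run_command()` is a poll loop over a `subprocess.Popen` object. Stripped of the spinner, the pattern can be sketched like this (the command is a harmless hypothetical stand-in):

```python
import subprocess
import sys
import time

# Launch a trivial child process; the real function runs the user's command.
process = subprocess.Popen([sys.executable, '-c', 'pass'])

# Popen.poll() returns None while the child is still running and the exit
# status once it has terminated, so this loop doubles as the wait.
while process.poll() is None:
    time.sleep(0.01)  # the real loop steps and sleeps the spinner here

exit_status = process.returncode
```

`run_command()` then propagates this status with `sys.exit(process.returncode)`, which is why the CLI's exit code mirrors the wrapped command's.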
def print_formatted_length(value):
    """Print a human readable length."""
    if '.' in value:
        output(format_length(float(value)))
    else:
        output(format_length(int(value)))


def print_formatted_number(value):
    """Print large numbers in a human readable format."""
    output(format_number(float(value)))


def print_formatted_size(value, binary):
    """Print a human readable size."""
    output(format_size(int(value), binary=binary))


def print_formatted_table(delimiter):
    """Read tabular data from standard input and print a table."""
    data = []
    for line in sys.stdin:
        line = line.rstrip()
        data.append(line.split(delimiter))
    output(format_pretty_table(data))


def print_formatted_timespan(value):
    """Print a human readable timespan."""
    output(format_timespan(float(value)))


def print_parsed_length(value):
    """Parse a human readable length and print the number of metres."""
    output(parse_length(value))


def print_parsed_size(value):
    """Parse a human readable data size and print the number of bytes."""
    output(parse_size(value))


def demonstrate_ansi_formatting():
    """Demonstrate the use of ANSI escape sequences."""
    # First we demonstrate the supported text styles.
    output('%s', ansi_wrap('Text styles:', bold=True))
    styles = ['normal', 'bright']
    styles.extend(ANSI_TEXT_STYLES.keys())
    for style_name in sorted(styles):
        options = dict(color=HIGHLIGHT_COLOR)
        if style_name != 'normal':
            options[style_name] = True
        style_label = style_name.replace('_', ' ').capitalize()
        output(' - %s', ansi_wrap(style_label, **options))
    # Now we demonstrate named foreground and background colors.
    for color_type, color_label in (('color', 'Foreground colors'),
                                    ('background', 'Background colors')):
        intensities = [
            ('normal', dict()),
            ('bright', dict(bright=True)),
        ]
        if color_type != 'background':
            intensities.insert(0, ('faint', dict(faint=True)))
        output('\n%s' % ansi_wrap('%s:' % color_label, bold=True))
        output(format_smart_table([
            [color_name] + [
                ansi_wrap(
                    'XXXXXX' if color_type != 'background' else (' ' * 6),
                    **dict(list(kw.items()) + [(color_type, color_name)])
                ) for label, kw in intensities
            ] for color_name in sorted(ANSI_COLOR_CODES.keys())
        ], column_names=['Color'] + [
            label.capitalize() for label, kw in intensities
        ]))
    # Demonstrate support for 256 colors as well.
    demonstrate_256_colors(0, 7, 'standard colors')
    demonstrate_256_colors(8, 15, 'high-intensity colors')
    demonstrate_256_colors(16, 231, '216 colors')
    demonstrate_256_colors(232, 255, 'gray scale colors')


def demonstrate_256_colors(i, j, group=None):
    """Demonstrate 256 color mode support."""
    # Generate the label.
    label = '256 color mode'
    if group:
        label += ' (%s)' % group
    output('\n' + ansi_wrap('%s:' % label, bold=True))
    # Generate a simple rendering of the colors in the requested range and
    # check if it will fit on a single line (given the terminal's width).
    single_line = ''.join(' ' + ansi_wrap(str(n), color=n) for n in range(i, j + 1))
    lines, columns = find_terminal_size()
    if columns >= len(ansi_strip(single_line)):
        output(single_line)
    else:
        # Generate a more complex rendering of the colors that will nicely wrap
        # over multiple lines without using too many lines.
        width = len(str(j)) + 1
        colors_per_line = int(columns / width)
        colors = [ansi_wrap(str(n).rjust(width), color=n) for n in range(i, j + 1)]
        blocks = [colors[n:n + colors_per_line] for n in range(0, len(colors), colors_per_line)]
        output('\n'.join(''.join(b) for b in blocks))

@@ -0,0 +1,146 @@
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: September 17, 2021
# URL: https://humanfriendly.readthedocs.io

"""
Compatibility with Python 2 and 3.

This module exposes aliases and functions that make it easier to write Python
code that is compatible with Python 2 and Python 3.

.. data:: basestring

   Alias for :func:`python2:basestring` (in Python 2) or :class:`python3:str`
   (in Python 3). See also :func:`is_string()`.

.. data:: HTMLParser

   Alias for :class:`python2:HTMLParser.HTMLParser` (in Python 2) or
   :class:`python3:html.parser.HTMLParser` (in Python 3).

.. data:: interactive_prompt

   Alias for :func:`python2:raw_input()` (in Python 2) or
   :func:`python3:input()` (in Python 3).

.. data:: StringIO

   Alias for :class:`python2:StringIO.StringIO` (in Python 2) or
   :class:`python3:io.StringIO` (in Python 3).

.. data:: unicode

   Alias for :func:`python2:unicode` (in Python 2) or :class:`python3:str` (in
   Python 3). See also :func:`coerce_string()`.

.. data:: monotonic

   Alias for :func:`python3:time.monotonic()` (in Python 3.3 and higher) or
   `monotonic.monotonic()` (a `conditional dependency
   <https://pypi.org/project/monotonic/>`_ on older Python versions).
"""

__all__ = (
    'HTMLParser',
    'StringIO',
    'basestring',
    'coerce_string',
    'interactive_prompt',
    'is_string',
    'is_unicode',
    'monotonic',
    'name2codepoint',
    'on_macos',
    'on_windows',
    'unichr',
    'unicode',
    'which',
)

# Standard library modules.
import sys

# Differences between Python 2 and 3.
try:
    # Python 2.
    unicode = unicode
    unichr = unichr
    basestring = basestring
    interactive_prompt = raw_input
    from distutils.spawn import find_executable as which
    from HTMLParser import HTMLParser
    from StringIO import StringIO
    from htmlentitydefs import name2codepoint
except (ImportError, NameError):
    # Python 3.
    unicode = str
    unichr = chr
    basestring = str
    interactive_prompt = input
    from shutil import which
    from html.parser import HTMLParser
    from io import StringIO
    from html.entities import name2codepoint

try:
    # Python 3.3 and higher.
    from time import monotonic
except ImportError:
    # A replacement for older Python versions:
|
||||
# https://pypi.org/project/monotonic/
|
||||
try:
|
||||
from monotonic import monotonic
|
||||
except (ImportError, RuntimeError):
|
||||
# We fall back to the old behavior of using time.time() instead of
|
||||
# failing when {time,monotonic}.monotonic() are both missing.
|
||||
from time import time as monotonic
|
||||
|
||||
|
||||
def coerce_string(value):
|
||||
"""
|
||||
Coerce any value to a Unicode string (:func:`python2:unicode` in Python 2 and :class:`python3:str` in Python 3).
|
||||
|
||||
:param value: The value to coerce.
|
||||
:returns: The value coerced to a Unicode string.
|
||||
"""
|
||||
return value if is_string(value) else unicode(value)
|
||||
|
||||
|
||||
def is_string(value):
|
||||
"""
|
||||
Check if a value is a :func:`python2:basestring` (in Python 2) or :class:`python3:str` (in Python 3) object.
|
||||
|
||||
:param value: The value to check.
|
||||
:returns: :data:`True` if the value is a string, :data:`False` otherwise.
|
||||
"""
|
||||
return isinstance(value, basestring)
|
||||
|
||||
|
||||
def is_unicode(value):
|
||||
"""
|
||||
Check if a value is a :func:`python2:unicode` (in Python 2) or :class:`python2:str` (in Python 3) object.
|
||||
|
||||
:param value: The value to check.
|
||||
:returns: :data:`True` if the value is a Unicode string, :data:`False` otherwise.
|
||||
"""
|
||||
return isinstance(value, unicode)
|
||||
|
||||
|
||||
def on_macos():
|
||||
"""
|
||||
Check if we're running on Apple MacOS.
|
||||
|
||||
:returns: :data:`True` if running MacOS, :data:`False` otherwise.
|
||||
"""
|
||||
return sys.platform.startswith('darwin')
|
||||
|
||||
|
||||
def on_windows():
|
||||
"""
|
||||
Check if we're running on the Microsoft Windows OS.
|
||||
|
||||
:returns: :data:`True` if running Windows, :data:`False` otherwise.
|
||||
"""
|
||||
return sys.platform.startswith('win')
|
||||
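On Python 3 the string aliases above all collapse onto `str`, so the module's helpers reduce to simple checks. A minimal Python 3-only sketch of the same behavior (standalone re-implementations, not imports from the library):

```python
def is_string(value):
    """Check for the one text type that exists on Python 3."""
    return isinstance(value, str)


def coerce_string(value):
    """Return the value unchanged if it is already text, else convert it."""
    return value if is_string(value) else str(value)


print(coerce_string(42))    # prints 42 (the value was converted to '42')
print(is_string(b'bytes'))  # prints False: bytes are not text on Python 3
```

The try/except import dance in the real module exists so the same names resolve on both major versions; the `NameError` in the except clause catches the missing Python 2 builtins (`unicode`, `raw_input`) on Python 3.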
@@ -0,0 +1,43 @@
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: March 2, 2020
# URL: https://humanfriendly.readthedocs.io

"""Simple function decorators to make Python programming easier."""

# Standard library modules.
import functools

# Public identifiers that require documentation.
__all__ = ('RESULTS_ATTRIBUTE', 'cached')

RESULTS_ATTRIBUTE = 'cached_results'
"""The name of the property used to cache the return values of functions (a string)."""


def cached(function):
    """
    Rudimentary caching decorator for functions.

    :param function: The function whose return value should be cached.
    :returns: The decorated function.

    The given function will only be called once, the first time the wrapper
    function is called. The return value is cached by the wrapper function as
    an attribute of the given function and returned on each subsequent call.

    .. note:: Currently no function arguments are supported because only a
              single return value can be cached. Accepting any function
              arguments at all would imply that the cache is parametrized on
              function arguments, which is not currently the case.
    """
    @functools.wraps(function)
    def wrapper():
        try:
            return getattr(wrapper, RESULTS_ATTRIBUTE)
        except AttributeError:
            result = function()
            setattr(wrapper, RESULTS_ATTRIBUTE, result)
            return result
    return wrapper


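The decorator above caches by storing the result as an attribute on the wrapper itself, so the wrapped function runs at most once. A self-contained usage sketch (re-stating the decorator so the example runs on its own; the `calls` list is just instrumentation for the demo):

```python
import functools

RESULTS_ATTRIBUTE = 'cached_results'


def cached(function):
    """Cache a zero-argument function's return value on first call."""
    @functools.wraps(function)
    def wrapper():
        try:
            return getattr(wrapper, RESULTS_ATTRIBUTE)
        except AttributeError:
            result = function()
            setattr(wrapper, RESULTS_ATTRIBUTE, result)
            return result
    return wrapper


calls = []


@cached
def expensive():
    calls.append(1)  # record each real invocation
    return sum(range(1000))


assert expensive() == expensive()  # second call is served from the cache
print(len(calls))  # prints 1: the body ran only once
```

Note the limitation the docstring calls out: because the cache is a single attribute, only zero-argument functions are supported; a parametrized cache would need something like `functools.lru_cache`.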
@@ -0,0 +1,251 @@
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: March 2, 2020
# URL: https://humanfriendly.readthedocs.io

"""
Support for deprecation warnings when importing names from old locations.

When software evolves, things tend to move around. This is usually detrimental
to backwards compatibility (in Python this primarily manifests itself as
:exc:`~exceptions.ImportError` exceptions).

While backwards compatibility is very important, it should not get in the way
of progress. It would be great to have the agility to move things around
without breaking backwards compatibility.

This is where the :mod:`humanfriendly.deprecation` module comes in: It enables
the definition of backwards compatible aliases that emit a deprecation warning
when they are accessed.

The way it works is that it wraps the original module in an :class:`DeprecationProxy`
object that defines a :func:`~DeprecationProxy.__getattr__()` special method to
override attribute access of the module.
"""

# Standard library modules.
import collections
import functools
import importlib
import inspect
import sys
import types
import warnings

# Modules included in our package.
from humanfriendly.text import format

# Registry of known aliases (used by humanfriendly.sphinx).
REGISTRY = collections.defaultdict(dict)

# Public identifiers that require documentation.
__all__ = ("DeprecationProxy", "define_aliases", "deprecated_args", "get_aliases", "is_method")


def define_aliases(module_name, **aliases):
    """
    Update a module with backwards compatible aliases.

    :param module_name: The ``__name__`` of the module (a string).
    :param aliases: Each keyword argument defines an alias. The values
                    are expected to be "dotted paths" (strings).

    The behavior of this function depends on whether the Sphinx documentation
    generator is active, because the use of :class:`DeprecationProxy` to shadow the
    real module in :data:`sys.modules` has the unintended side effect of
    breaking autodoc support for ``:data:`` members (module variables).

    To avoid breaking Sphinx the proxy object is omitted and instead the
    aliased names are injected into the original module namespace, to make sure
    that imports can be satisfied when the documentation is being rendered.

    If you run into cyclic dependencies caused by :func:`define_aliases()` when
    running Sphinx, you can try moving the call to :func:`define_aliases()` to
    the bottom of the Python module you're working on.
    """
    module = sys.modules[module_name]
    proxy = DeprecationProxy(module, aliases)
    # Populate the registry of aliases.
    for name, target in aliases.items():
        REGISTRY[module.__name__][name] = target
    # Avoid confusing Sphinx.
    if "sphinx" in sys.modules:
        for name, target in aliases.items():
            setattr(module, name, proxy.resolve(target))
    else:
        # Install a proxy object to raise DeprecationWarning.
        sys.modules[module_name] = proxy


def get_aliases(module_name):
    """
    Get the aliases defined by a module.

    :param module_name: The ``__name__`` of the module (a string).
    :returns: A dictionary with string keys and values:

              1. Each key gives the name of an alias
                 created for backwards compatibility.

              2. Each value gives the dotted path of
                 the proper location of the identifier.

              An empty dictionary is returned for modules that
              don't define any backwards compatible aliases.
    """
    return REGISTRY.get(module_name, {})


def deprecated_args(*names):
    """
    Deprecate positional arguments without dropping backwards compatibility.

    :param names:

        The positional arguments to :func:`deprecated_args()` give the names of
        the positional arguments that the to-be-decorated function should warn
        about being deprecated and translate to keyword arguments.

    :returns: A decorator function specialized to `names`.

    The :func:`deprecated_args()` decorator function was created to make it
    easy to switch from positional arguments to keyword arguments [#]_ while
    preserving backwards compatibility [#]_ and informing call sites
    about the change.

    .. [#] Increased flexibility is the main reason why I find myself switching
           from positional arguments to (optional) keyword arguments as my code
           evolves to support more use cases.

    .. [#] In my experience positional argument order implicitly becomes part
           of API compatibility whether intended or not. While this makes sense
           for functions that over time adopt more and more optional arguments,
           at a certain point it becomes an inconvenience to code maintenance.

    Here's an example of how to use the decorator::

      @deprecated_args('text')
      def report_choice(**options):
          print(options['text'])

    When the decorated function is called with positional arguments
    a deprecation warning is given::

      >>> report_choice('this will give a deprecation warning')
      DeprecationWarning: report_choice has deprecated positional arguments, please switch to keyword arguments
      this will give a deprecation warning

    But when the function is called with keyword arguments no deprecation
    warning is emitted::

      >>> report_choice(text='this will not give a deprecation warning')
      this will not give a deprecation warning
    """
    def decorator(function):
        def translate(args, kw):
            # Raise TypeError when too many positional arguments are passed to the decorated function.
            if len(args) > len(names):
                raise TypeError(
                    format(
                        "{name} expected at most {limit} arguments, got {count}",
                        name=function.__name__,
                        limit=len(names),
                        count=len(args),
                    )
                )
            # Emit a deprecation warning when positional arguments are used.
            if args:
                warnings.warn(
                    format(
                        "{name} has deprecated positional arguments, please switch to keyword arguments",
                        name=function.__name__,
                    ),
                    category=DeprecationWarning,
                    stacklevel=3,
                )
            # Translate positional arguments to keyword arguments.
            for name, value in zip(names, args):
                kw[name] = value
        if is_method(function):
            @functools.wraps(function)
            def wrapper(*args, **kw):
                """Wrapper for instance methods."""
                args = list(args)
                self = args.pop(0)
                translate(args, kw)
                return function(self, **kw)
        else:
            @functools.wraps(function)
            def wrapper(*args, **kw):
                """Wrapper for module level functions."""
                translate(args, kw)
                return function(**kw)
        return wrapper
    return decorator


def is_method(function):
    """Check if the expected usage of the given function is as an instance method."""
    try:
        # Python 3.3 and newer.
        signature = inspect.signature(function)
        return "self" in signature.parameters
    except AttributeError:
        # Python 3.2 and older.
        metadata = inspect.getargspec(function)
        return "self" in metadata.args


class DeprecationProxy(types.ModuleType):

    """Emit deprecation warnings for imports that should be updated."""

    def __init__(self, module, aliases):
        """
        Initialize an :class:`DeprecationProxy` object.

        :param module: The original module object.
        :param aliases: A dictionary of aliases.
        """
        # Initialize our superclass.
        super(DeprecationProxy, self).__init__(name=module.__name__)
        # Store initializer arguments.
        self.module = module
        self.aliases = aliases

    def __getattr__(self, name):
        """
        Override module attribute lookup.

        :param name: The name to look up (a string).
        :returns: The attribute value.
        """
        # Check if the given name is an alias.
        target = self.aliases.get(name)
        if target is not None:
            # Emit the deprecation warning.
            warnings.warn(
                format("%s.%s was moved to %s, please update your imports", self.module.__name__, name, target),
                category=DeprecationWarning,
                stacklevel=2,
            )
            # Resolve the dotted path.
            return self.resolve(target)
        # Look up the name in the original module namespace.
        value = getattr(self.module, name, None)
        if value is not None:
            return value
        # Fall back to the default behavior.
        raise AttributeError(format("module '%s' has no attribute '%s'", self.module.__name__, name))

    def resolve(self, target):
        """
        Look up the target of an alias.

        :param target: The fully qualified dotted path (a string).
        :returns: The value of the given target.
        """
        module_name, _, member = target.rpartition(".")
        module = importlib.import_module(module_name)
        return getattr(module, member)
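The core trick of `DeprecationProxy` is that a `types.ModuleType` subclass can shadow the real module in `sys.modules`, and its `__getattr__` only fires for names that normal attribute lookup does not find. A toy version of that mechanism (the class and variable names here are illustrative, and the target values are passed directly instead of as dotted paths to keep the sketch self-contained):

```python
import types
import warnings


class MiniProxy(types.ModuleType):
    """Toy DeprecationProxy: warn when an aliased name is accessed."""

    def __init__(self, module, aliases):
        super(MiniProxy, self).__init__(name=module.__name__)
        self.module = module
        self.aliases = aliases

    def __getattr__(self, name):
        # Only called when the name wasn't found by normal lookup.
        if name in self.aliases:
            warnings.warn("%s.%s was moved, please update your imports"
                          % (self.module.__name__, name),
                          category=DeprecationWarning, stacklevel=2)
            return self.aliases[name]
        return getattr(self.module, name)


demo = types.ModuleType('demo')
demo.new_name = 'value'
proxy = MiniProxy(demo, {'old_name': demo.new_name})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    assert proxy.old_name == 'value'  # triggers the DeprecationWarning
    assert proxy.new_name == 'value'  # forwarded silently to the real module

print(len(caught))  # prints 1: only the alias access warned
```

In the real module the target is resolved lazily via `importlib.import_module()` on each aliased access, which is what lets the moved name live in a module that may not be imported yet.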
@@ -0,0 +1,376 @@
# vim: fileencoding=utf-8

# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: February 9, 2020
# URL: https://humanfriendly.readthedocs.io

"""
Interactive terminal prompts.

The :mod:`~humanfriendly.prompts` module enables interaction with the user
(operator) by asking for confirmation (:func:`prompt_for_confirmation()`) and
asking to choose from a list of options (:func:`prompt_for_choice()`). It works
by rendering interactive prompts on the terminal.
"""

# Standard library modules.
import logging
import sys

# Modules included in our package.
from humanfriendly.compat import interactive_prompt
from humanfriendly.terminal import (
    HIGHLIGHT_COLOR,
    ansi_strip,
    ansi_wrap,
    connected_to_terminal,
    terminal_supports_colors,
    warning,
)
from humanfriendly.text import format, concatenate

# Public identifiers that require documentation.
__all__ = (
    'MAX_ATTEMPTS',
    'TooManyInvalidReplies',
    'logger',
    'prepare_friendly_prompts',
    'prepare_prompt_text',
    'prompt_for_choice',
    'prompt_for_confirmation',
    'prompt_for_input',
    'retry_limit',
)

MAX_ATTEMPTS = 10
"""The number of times an interactive prompt is shown on invalid input (an integer)."""

# Initialize a logger for this module.
logger = logging.getLogger(__name__)


def prompt_for_confirmation(question, default=None, padding=True):
    """
    Prompt the user for confirmation.

    :param question: The text that explains what the user is confirming (a string).
    :param default: The default value (a boolean) or :data:`None`.
    :param padding: Refer to the documentation of :func:`prompt_for_input()`.
    :returns: - If the user enters 'yes' or 'y' then :data:`True` is returned.
              - If the user enters 'no' or 'n' then :data:`False` is returned.
              - If the user doesn't enter any text or standard input is not
                connected to a terminal (which makes it impossible to prompt
                the user) the value of the keyword argument ``default`` is
                returned (if that value is not :data:`None`).
    :raises: - Any exceptions raised by :func:`retry_limit()`.
             - Any exceptions raised by :func:`prompt_for_input()`.

    When `default` is :data:`None` and the user doesn't enter any text an
    error message is printed and the prompt is repeated:

    >>> prompt_for_confirmation("Are you sure?")
    <BLANKLINE>
     Are you sure? [y/n]
    <BLANKLINE>
     Error: Please enter 'yes' or 'no' (there's no default choice).
    <BLANKLINE>
     Are you sure? [y/n]

    The same thing happens when the user enters text that isn't recognized:

    >>> prompt_for_confirmation("Are you sure?")
    <BLANKLINE>
     Are you sure? [y/n] about what?
    <BLANKLINE>
     Error: Please enter 'yes' or 'no' (the text 'about what?' is not recognized).
    <BLANKLINE>
     Are you sure? [y/n]
    """
    # Generate the text for the prompt.
    prompt_text = prepare_prompt_text(question, bold=True)
    # Append the valid replies (and default reply) to the prompt text.
    hint = "[Y/n]" if default else "[y/N]" if default is not None else "[y/n]"
    prompt_text += " %s " % prepare_prompt_text(hint, color=HIGHLIGHT_COLOR)
    # Loop until a valid response is given.
    logger.debug("Requesting interactive confirmation from terminal: %r", ansi_strip(prompt_text).rstrip())
    for attempt in retry_limit():
        reply = prompt_for_input(prompt_text, '', padding=padding, strip=True)
        if reply.lower() in ('y', 'yes'):
            logger.debug("Confirmation granted by reply (%r).", reply)
            return True
        elif reply.lower() in ('n', 'no'):
            logger.debug("Confirmation denied by reply (%r).", reply)
            return False
        elif (not reply) and default is not None:
            logger.debug("Default choice selected by empty reply (%r).",
                         "granted" if default else "denied")
            return default
        else:
            details = ("the text '%s' is not recognized" % reply
                       if reply else "there's no default choice")
            logger.debug("Got %s reply (%s), retrying (%i/%i) ..",
                         "invalid" if reply else "empty", details,
                         attempt, MAX_ATTEMPTS)
            warning("{indent}Error: Please enter 'yes' or 'no' ({details}).",
                    indent=' ' if padding else '', details=details)


def prompt_for_choice(choices, default=None, padding=True):
    """
    Prompt the user to select a choice from a group of options.

    :param choices: A sequence of strings with available options.
    :param default: The default choice if the user simply presses Enter
                    (expected to be a string, defaults to :data:`None`).
    :param padding: Refer to the documentation of
                    :func:`~humanfriendly.prompts.prompt_for_input()`.
    :returns: The string corresponding to the user's choice.
    :raises: - :exc:`~exceptions.ValueError` if `choices` is an empty sequence.
             - Any exceptions raised by
               :func:`~humanfriendly.prompts.retry_limit()`.
             - Any exceptions raised by
               :func:`~humanfriendly.prompts.prompt_for_input()`.

    When no options are given an exception is raised:

    >>> prompt_for_choice([])
    Traceback (most recent call last):
      File "humanfriendly/prompts.py", line 148, in prompt_for_choice
        raise ValueError("Can't prompt for choice without any options!")
    ValueError: Can't prompt for choice without any options!

    If a single option is given the user isn't prompted:

    >>> prompt_for_choice(['only one choice'])
    'only one choice'

    Here's what the actual prompt looks like by default:

    >>> prompt_for_choice(['first option', 'second option'])
    <BLANKLINE>
      1. first option
      2. second option
    <BLANKLINE>
     Enter your choice as a number or unique substring (Control-C aborts): second
    <BLANKLINE>
    'second option'

    If you don't like the whitespace (empty lines and indentation):

    >>> prompt_for_choice(['first option', 'second option'], padding=False)
    1. first option
    2. second option
    Enter your choice as a number or unique substring (Control-C aborts): first
    'first option'
    """
    indent = ' ' if padding else ''
    # Make sure we can use 'choices' more than once (i.e. not a generator).
    choices = list(choices)
    if len(choices) == 1:
        # If there's only one option there's no point in prompting the user.
        logger.debug("Skipping interactive prompt because there's only one option (%r).", choices[0])
        return choices[0]
    elif not choices:
        # We can't render a choice prompt without any options.
        raise ValueError("Can't prompt for choice without any options!")
    # Generate the prompt text.
    prompt_text = ('\n\n' if padding else '\n').join([
        # Present the available choices in a user friendly way.
        "\n".join([
            (u" %i. %s" % (i, choice)) + (" (default choice)" if choice == default else "")
            for i, choice in enumerate(choices, start=1)
        ]),
        # Instructions for the user.
        "Enter your choice as a number or unique substring (Control-C aborts): ",
    ])
    prompt_text = prepare_prompt_text(prompt_text, bold=True)
    # Loop until a valid choice is made.
    logger.debug("Requesting interactive choice on terminal (options are %s) ..",
                 concatenate(map(repr, choices)))
    for attempt in retry_limit():
        reply = prompt_for_input(prompt_text, '', padding=padding, strip=True)
        if not reply and default is not None:
            logger.debug("Default choice selected by empty reply (%r).", default)
            return default
        elif reply.isdigit():
            index = int(reply) - 1
            if 0 <= index < len(choices):
                logger.debug("Option (%r) selected by numeric reply (%s).", choices[index], reply)
                return choices[index]
        # Check for substring matches.
        matches = []
        for choice in choices:
            lower_reply = reply.lower()
            lower_choice = choice.lower()
            if lower_reply == lower_choice:
                # If we have an 'exact' match we return it immediately.
                logger.debug("Option (%r) selected by reply (exact match).", choice)
                return choice
            elif lower_reply in lower_choice and len(lower_reply) > 0:
                # Otherwise we gather substring matches.
                matches.append(choice)
        if len(matches) == 1:
            # If a single choice was matched we return it.
            logger.debug("Option (%r) selected by reply (substring match on %r).", matches[0], reply)
            return matches[0]
        else:
            # Give the user a hint about what went wrong.
            if matches:
                details = format("text '%s' matches more than one choice: %s", reply, concatenate(matches))
            elif reply.isdigit():
                details = format("number %i is not a valid choice", int(reply))
            elif reply and not reply.isspace():
                details = format("text '%s' doesn't match any choices", reply)
            else:
                details = "there's no default choice"
            logger.debug("Got %s reply (%s), retrying (%i/%i) ..",
                         "invalid" if reply else "empty", details,
                         attempt, MAX_ATTEMPTS)
            warning("%sError: Invalid input (%s).", indent, details)


def prompt_for_input(question, default=None, padding=True, strip=True):
    """
    Prompt the user for input (free form text).

    :param question: An explanation of what is expected from the user (a string).
    :param default: The return value if the user doesn't enter any text or
                    standard input is not connected to a terminal (which
                    makes it impossible to prompt the user).
    :param padding: Render empty lines before and after the prompt to make it
                    stand out from the surrounding text? (a boolean, defaults
                    to :data:`True`)
    :param strip: Strip leading/trailing whitespace from the user's reply?
    :returns: The text entered by the user (a string) or the value of the
              `default` argument.
    :raises: - :exc:`~exceptions.KeyboardInterrupt` when the program is
               interrupted_ while the prompt is active, for example
               because the user presses Control-C_.
             - :exc:`~exceptions.EOFError` when reading from `standard input`_
               fails, for example because the user presses Control-D_ or
               because the standard input stream is redirected (only if
               `default` is :data:`None`).

    .. _Control-C: https://en.wikipedia.org/wiki/Control-C#In_command-line_environments
    .. _Control-D: https://en.wikipedia.org/wiki/End-of-transmission_character#Meaning_in_Unix
    .. _interrupted: https://en.wikipedia.org/wiki/Unix_signal#SIGINT
    .. _standard input: https://en.wikipedia.org/wiki/Standard_streams#Standard_input_.28stdin.29
    """
    prepare_friendly_prompts()
    reply = None
    try:
        # Prefix an empty line to the text and indent by one space?
        if padding:
            question = '\n' + question
            question = question.replace('\n', '\n ')
        # Render the prompt and wait for the user's reply.
        try:
            reply = interactive_prompt(question)
        finally:
            if reply is None:
                # If the user terminated the prompt using Control-C or
                # Control-D instead of pressing Enter no newline will be
                # rendered after the prompt's text. The result looks kind of
                # weird:
                #
                #   $ python -c 'print(raw_input("Are you sure? "))'
                #   Are you sure? ^CTraceback (most recent call last):
                #     File "<string>", line 1, in <module>
                #   KeyboardInterrupt
                #
                # We can avoid this by emitting a newline ourselves if an
                # exception was raised (signaled by `reply' being None).
                sys.stderr.write('\n')
            if padding:
                # If the caller requested (didn't opt out of) `padding' then we'll
                # emit a newline regardless of whether an exception is being
                # handled. This helps to make interactive prompts `stand out' from
                # a surrounding `wall of text' on the terminal.
                sys.stderr.write('\n')
    except BaseException as e:
        if isinstance(e, EOFError) and default is not None:
            # If standard input isn't connected to an interactive terminal
            # but the caller provided a default we'll return that.
            logger.debug("Got EOF from terminal, returning default value (%r) ..", default)
            return default
        else:
            # Otherwise we log that the prompt was interrupted but propagate
            # the exception to the caller.
            logger.warning("Interactive prompt was interrupted by exception!", exc_info=True)
            raise
    if default is not None and not reply:
        # If the reply is empty and `default' is None we don't want to return
        # None because it's nicer for callers to be able to assume that the
        # return value is always a string.
        return default
    else:
        return reply.strip()


def prepare_prompt_text(prompt_text, **options):
    """
    Wrap a text to be rendered as an interactive prompt in ANSI escape sequences.

    :param prompt_text: The text to render on the prompt (a string).
    :param options: Any keyword arguments are passed on to :func:`.ansi_wrap()`.
    :returns: The resulting prompt text (a string).

    ANSI escape sequences are only used when the standard output stream is
    connected to a terminal. When the standard input stream is connected to a
    terminal any escape sequences are wrapped in "readline hints".
    """
    return (ansi_wrap(prompt_text, readline_hints=connected_to_terminal(sys.stdin), **options)
            if terminal_supports_colors(sys.stdout)
            else prompt_text)


def prepare_friendly_prompts():
    u"""
    Make interactive prompts more user friendly.

    The prompts presented by :func:`python2:raw_input()` (in Python 2) and
    :func:`python3:input()` (in Python 3) are not very user friendly by
    default, for example the cursor keys (:kbd:`←`, :kbd:`↑`, :kbd:`→` and
    :kbd:`↓`) and the :kbd:`Home` and :kbd:`End` keys enter characters instead
    of performing the action you would expect them to. By simply importing the
    :mod:`readline` module these prompts become much friendlier (as mentioned
    in the Python standard library documentation).

    This function is called by the other functions in this module to enable
    user friendly prompts.
    """
    try:
        import readline  # NOQA
    except ImportError:
        # might not be available on Windows if pyreadline isn't installed
        pass


def retry_limit(limit=MAX_ATTEMPTS):
    """
    Allow the user to provide valid input up to `limit` times.

    :param limit: The maximum number of attempts (a number,
                  defaults to :data:`MAX_ATTEMPTS`).
    :returns: A generator of numbers starting from one.
    :raises: :exc:`TooManyInvalidReplies` when an interactive prompt
             receives repeated invalid input (:data:`MAX_ATTEMPTS`).

    This function returns a generator for interactive prompts that want to
    repeat on invalid input without getting stuck in infinite loops.
    """
    for i in range(limit):
        yield i + 1
    msg = "Received too many invalid replies on interactive prompt, giving up! (tried %i times)"
    formatted_msg = msg % limit
    # Make sure the event is logged.
    logger.warning(formatted_msg)
    # Force the caller to decide what to do now.
    raise TooManyInvalidReplies(formatted_msg)


class TooManyInvalidReplies(Exception):

    """Raised by interactive prompts when they've received too many invalid inputs."""
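The reply-matching strategy in `prompt_for_choice()` (one-based numeric index, then exact case-insensitive match, then unique case-insensitive substring) can be isolated as a pure function. The name `match_choice` is a hypothetical helper for illustration, not part of the library's API:

```python
def match_choice(reply, choices):
    """Resolve a reply to one choice, or None when ambiguous or unmatched."""
    reply = reply.strip()
    # Numeric replies select by one-based index.
    if reply.isdigit():
        index = int(reply) - 1
        if 0 <= index < len(choices):
            return choices[index]
    matches = []
    for choice in choices:
        if reply.lower() == choice.lower():
            return choice  # exact matches win immediately
        if reply and reply.lower() in choice.lower():
            matches.append(choice)  # gather substring matches
    # Only an unambiguous substring match counts.
    return matches[0] if len(matches) == 1 else None


options = ['first option', 'second option']
print(match_choice('2', options))       # prints second option
print(match_choice('sec', options))     # prints second option
print(match_choice('option', options))  # prints None (matches both)
```

Returning `None` here stands in for the real function's behavior of warning and re-prompting via `retry_limit()`, which caps the loop at `MAX_ATTEMPTS` before raising `TooManyInvalidReplies`.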
@@ -0,0 +1,315 @@
|
||||
# Human friendly input/output in Python.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: June 11, 2021
# URL: https://humanfriendly.readthedocs.io

"""
Customizations for and integration with the Sphinx_ documentation generator.

The :mod:`humanfriendly.sphinx` module uses the `Sphinx extension API`_ to
customize the process of generating Sphinx based Python documentation. To
explore the functionality this module offers it's best to start reading
from the :func:`setup()` function.

.. _Sphinx: http://www.sphinx-doc.org/
.. _Sphinx extension API: http://sphinx-doc.org/extdev/appapi.html
"""

# Standard library modules.
import logging
import types

# External dependencies (if Sphinx is installed docutils will be installed).
import docutils.nodes
import docutils.utils

# Modules included in our package.
from humanfriendly.deprecation import get_aliases
from humanfriendly.text import compact, dedent, format
from humanfriendly.usage import USAGE_MARKER, render_usage

# Public identifiers that require documentation.
__all__ = (
    "deprecation_note_callback",
    "enable_deprecation_notes",
    "enable_man_role",
    "enable_pypi_role",
    "enable_special_methods",
    "enable_usage_formatting",
    "logger",
    "man_role",
    "pypi_role",
    "setup",
    "special_methods_callback",
    "usage_message_callback",
)

# Initialize a logger for this module.
logger = logging.getLogger(__name__)


def deprecation_note_callback(app, what, name, obj, options, lines):
    """
    Automatically document aliases defined using :func:`~humanfriendly.deprecation.define_aliases()`.

    Refer to :func:`enable_deprecation_notes()` to enable the use of this
    function (you probably don't want to call :func:`deprecation_note_callback()`
    directly).

    This function implements a callback for ``autodoc-process-docstring`` that
    reformats module docstrings to append an overview of aliases defined by the
    module.

    The parameters expected by this function are those defined for Sphinx event
    callback functions (i.e. I'm not going to document them here :-).
    """
    if isinstance(obj, types.ModuleType) and lines:
        aliases = get_aliases(obj.__name__)
        if aliases:
            # Convert the existing docstring to a string and remove leading
            # indentation from that string, otherwise our generated content
            # would have to match the existing indentation in order not to
            # break docstring parsing (because indentation is significant
            # in the reStructuredText format).
            blocks = [dedent("\n".join(lines))]
            # Use an admonition to group the deprecated aliases together and
            # to distinguish them from the autodoc entries that follow.
            blocks.append(".. note:: Deprecated names")
            indent = " " * 3
            if len(aliases) == 1:
                explanation = """
                    The following alias exists to preserve backwards compatibility,
                    however a :exc:`~exceptions.DeprecationWarning` is triggered
                    when it is accessed, because this alias will be removed
                    in a future release.
                """
            else:
                explanation = """
                    The following aliases exist to preserve backwards compatibility,
                    however a :exc:`~exceptions.DeprecationWarning` is triggered
                    when they are accessed, because these aliases will be
                    removed in a future release.
                """
            blocks.append(indent + compact(explanation))
            for name, target in aliases.items():
                blocks.append(format("%s.. data:: %s", indent, name))
                blocks.append(format("%sAlias for :obj:`%s`.", indent * 2, target))
            update_lines(lines, "\n\n".join(blocks))


def enable_deprecation_notes(app):
    """
    Enable documenting backwards compatibility aliases using the autodoc_ extension.

    :param app: The Sphinx application object.

    This function connects the :func:`deprecation_note_callback()` function to
    ``autodoc-process-docstring`` events.

    .. _autodoc: http://www.sphinx-doc.org/en/stable/ext/autodoc.html
    """
    app.connect("autodoc-process-docstring", deprecation_note_callback)


def enable_man_role(app):
    """
    Enable the ``:man:`` role for linking to Debian Linux manual pages.

    :param app: The Sphinx application object.

    This function registers the :func:`man_role()` function to handle the
    ``:man:`` role.
    """
    app.add_role("man", man_role)


def enable_pypi_role(app):
    """
    Enable the ``:pypi:`` role for linking to the Python Package Index.

    :param app: The Sphinx application object.

    This function registers the :func:`pypi_role()` function to handle the
    ``:pypi:`` role.
    """
    app.add_role("pypi", pypi_role)


def enable_special_methods(app):
    """
    Enable documenting "special methods" using the autodoc_ extension.

    :param app: The Sphinx application object.

    This function connects the :func:`special_methods_callback()` function to
    ``autodoc-skip-member`` events.

    .. _autodoc: http://www.sphinx-doc.org/en/stable/ext/autodoc.html
    """
    app.connect("autodoc-skip-member", special_methods_callback)


def enable_usage_formatting(app):
    """
    Reformat human friendly usage messages to reStructuredText_.

    :param app: The Sphinx application object (as given to ``setup()``).

    This function connects the :func:`usage_message_callback()` function to
    ``autodoc-process-docstring`` events.

    .. _reStructuredText: https://en.wikipedia.org/wiki/ReStructuredText
    """
    app.connect("autodoc-process-docstring", usage_message_callback)


def man_role(role, rawtext, text, lineno, inliner, options={}, content=[]):
    """
    Convert a Linux manual topic to a hyperlink.

    Using the ``:man:`` role is very simple, here's an example:

    .. code-block:: rst

       See the :man:`python` documentation.

    This results in the following:

      See the :man:`python` documentation.

    As the example shows you can use the role inline, embedded in sentences of
    text. In the generated documentation the ``:man:`` text is omitted and a
    hyperlink pointing to the Debian Linux manual pages is emitted.
    """
    man_url = "https://manpages.debian.org/%s" % text
    reference = docutils.nodes.reference(rawtext, docutils.utils.unescape(text), refuri=man_url, **options)
    return [reference], []


def pypi_role(role, rawtext, text, lineno, inliner, options={}, content=[]):
    """
    Generate hyperlinks to the Python Package Index.

    Using the ``:pypi:`` role is very simple, here's an example:

    .. code-block:: rst

       See the :pypi:`humanfriendly` package.

    This results in the following:

      See the :pypi:`humanfriendly` package.

    As the example shows you can use the role inline, embedded in sentences of
    text. In the generated documentation the ``:pypi:`` text is omitted and a
    hyperlink pointing to the Python Package Index is emitted.
    """
    pypi_url = "https://pypi.org/project/%s/" % text
    reference = docutils.nodes.reference(rawtext, docutils.utils.unescape(text), refuri=pypi_url, **options)
    return [reference], []


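Both role functions follow the same pattern: interpolate the role's text into a URL template, then wrap it in a docutils reference node. The URL construction can be isolated and checked on its own; the sketch below extracts just that step into two hypothetical helpers (the node creation needs docutils and is omitted):

```python
def pypi_url(text):
    # Mirrors the URL template used by pypi_role().
    return "https://pypi.org/project/%s/" % text


def man_url(text):
    # Mirrors the URL template used by man_role().
    return "https://manpages.debian.org/%s" % text
```

For example, `pypi_url("humanfriendly")` yields `https://pypi.org/project/humanfriendly/` and `man_url("python")` yields `https://manpages.debian.org/python`.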
def setup(app):
    """
    Enable all of the provided Sphinx_ customizations.

    :param app: The Sphinx application object.

    The :func:`setup()` function makes it easy to enable all of the Sphinx
    customizations provided by the :mod:`humanfriendly.sphinx` module with the
    least amount of code. All you need to do is to add the module name to the
    ``extensions`` variable in your ``conf.py`` file:

    .. code-block:: python

       # Sphinx extension module names.
       extensions = [
           'sphinx.ext.autodoc',
           'sphinx.ext.doctest',
           'sphinx.ext.intersphinx',
           'humanfriendly.sphinx',
       ]

    When Sphinx sees the :mod:`humanfriendly.sphinx` name it will import the
    module and call its :func:`setup()` function. This function will then call
    the following:

    - :func:`enable_deprecation_notes()`
    - :func:`enable_man_role()`
    - :func:`enable_pypi_role()`
    - :func:`enable_special_methods()`
    - :func:`enable_usage_formatting()`

    Of course more functionality may be added at a later stage. If you don't
    like that idea you may be better off calling the individual functions from
    your own ``setup()`` function.
    """
    from humanfriendly import __version__

    enable_deprecation_notes(app)
    enable_man_role(app)
    enable_pypi_role(app)
    enable_special_methods(app)
    enable_usage_formatting(app)

    return dict(parallel_read_safe=True, parallel_write_safe=True, version=__version__)


def special_methods_callback(app, what, name, obj, skip, options):
    """
    Enable documenting "special methods" using the autodoc_ extension.

    Refer to :func:`enable_special_methods()` to enable the use of this
    function (you probably don't want to call
    :func:`special_methods_callback()` directly).

    This function implements a callback for ``autodoc-skip-member`` events to
    include documented "special methods" (method names with two leading and two
    trailing underscores) in your documentation. The result is similar to the
    use of the ``special-members`` flag with one big difference: Special
    methods are included but other types of members are ignored. This means
    that attributes like ``__weakref__`` will always be ignored (this was my
    main annoyance with the ``special-members`` flag).

    The parameters expected by this function are those defined for Sphinx event
    callback functions (i.e. I'm not going to document them here :-).
    """
    if getattr(obj, "__doc__", None) and isinstance(obj, (types.FunctionType, types.MethodType)):
        return False
    else:
        return skip


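The skip decision above only inspects the member object itself, so it can be exercised without running Sphinx. A standalone copy of the logic (the unused `app`, `what`, `name` and `options` parameters are passed as `None`, and `documented_function` is a made-up member for illustration):

```python
import types


def special_methods_callback(app, what, name, obj, skip, options):
    # Keep documented plain functions/methods (including __dunder__ ones);
    # defer to autodoc's default decision (the 'skip' argument) otherwise.
    if getattr(obj, "__doc__", None) and isinstance(obj, (types.FunctionType, types.MethodType)):
        return False
    else:
        return skip


def documented_function():
    """A documented function is never skipped."""
```

A documented function is kept even when autodoc wanted to skip it, while a non-function member such as the ``__weakref__`` descriptor falls through to the default decision.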
def update_lines(lines, text):
    """Private helper for ``autodoc-process-docstring`` callbacks."""
    while lines:
        lines.pop()
    lines.extend(text.splitlines())


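Note that `update_lines()` empties and refills the list in place rather than rebinding the `lines` name: Sphinx keeps its own reference to that buffer, so a rebound local would be invisible to the caller. A quick demonstration of why the in-place mutation matters:

```python
def update_lines(lines, text):
    # Empty the caller's list in place, then refill it from the new text.
    while lines:
        lines.pop()
    lines.extend(text.splitlines())


buffer = ["old", "docstring"]
alias = buffer  # simulates Sphinx holding onto the same list object
update_lines(buffer, "new\ndocstring")
# 'alias' now sees ["new", "docstring"] because the object was mutated,
# not replaced; 'lines = text.splitlines()' inside the helper would not work.
```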
def usage_message_callback(app, what, name, obj, options, lines):
    """
    Reformat human friendly usage messages to reStructuredText_.

    Refer to :func:`enable_usage_formatting()` to enable the use of this
    function (you probably don't want to call :func:`usage_message_callback()`
    directly).

    This function implements a callback for ``autodoc-process-docstring`` that
    reformats module docstrings using :func:`.render_usage()` so that Sphinx
    doesn't mangle usage messages that were written to be human readable
    instead of machine readable. Only module docstrings whose first line starts
    with :data:`.USAGE_MARKER` are reformatted.

    The parameters expected by this function are those defined for Sphinx event
    callback functions (i.e. I'm not going to document them here :-).
    """
    # Make sure we only modify the docstrings of modules.
    if isinstance(obj, types.ModuleType) and lines:
        # Make sure we only modify docstrings containing a usage message.
        if lines[0].startswith(USAGE_MARKER):
            # Convert the usage message to reStructuredText.
            text = render_usage("\n".join(lines))
            # Fill up the buffer with our modified docstring.
            update_lines(lines, text)