Switch to high-quality Piper TTS and add label translations
@@ -0,0 +1,4 @@
TAPO_USER=admin
TAPO_PASSWORD=your_password_here
# Later for other services
PORT=8000
@@ -0,0 +1,11 @@
__pycache__/
*.pyc
.env
venv/
.DS_Store

# Snapshots and models
snapshots/*.jpg
snapshots/*.png
models/*.pt
!models/.gitkeep
@@ -0,0 +1,16 @@
FROM python:3.11-slim

WORKDIR /app

# System dependencies for OpenCV etc.
RUN apt-get update && apt-get install -y \
    libgl1-mesa-glx \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
@@ -0,0 +1,57 @@
# Project: Home-Bro – The Annoying Roommate

## 🎯 Vision

An AI-based monitoring system with personality. Home-Bro watches rooms through various cameras (satellites) and comments on their state (e.g. clutter) in a sarcastic, proactive way.

## 🏗 Architecture (Distributed System)

### 1. The Brain - Repository: `home-bro-brain`

- **Hardware:** AMD Ryzen 7500 (PC)
- **Environment:** Docker container (FastAPI + YOLOv11)
- **Task:** Central image analysis, object detection (cups, plates, bottles), and generation of sarcastic remarks.

### 2. The Satellites (Clients) - Repository: `home-bro-client`

- **Primary satellite:** Raspberry Pi 4 with a Logitech C922 webcam.
  - **Input:** Wake word via "Porcupine", audio recording, 1080p snapshots.
  - **Output:** Audio playback over a Bluetooth speaker (CSL).
- **Video satellites:**
  - TP-Link Tapo C220 (connected via RTSP/OpenCV).
  - ESP32-CAM modules (OV3660/OV2640) for adjacent rooms.

## 🛠 Tech Stack

- **AI:** Ultralytics YOLOv11 (object detection).
- **Backend:** FastAPI (Python) for the central API.
- **Communication:** REST (push from the Pi) & RTSP (pull from the Tapo).
- **Editor:** Antigravity (Gitea integration).
- **Docs:** NotebookLM for strategy & project memory.

## 📋 Current Development Roadmap

### Phase 1: Brain Foundation (Local on Mac/PC)

- [x] Set up the repository structure.
- [x] Define `requirements.txt`.
- [x] Implement `app/vision.py` with YOLO inference and sarcasm logic.
- [x] Finish `app/main.py` as the API endpoint.
- [x] Local test via the FastAPI Swagger UI (`/docs`) or a test script.

### Phase 2: Client Integration (Pi & Webcam)

- [ ] Create the `home-bro-client` repository.
- [ ] Implement snapshot logic on the Pi (`fswebcam` or `libcamera`).
- [ ] Hook the wake-word trigger up to the image upload to the Brain.

### Phase 3: Expansion & "Annoyance Factor"

- [ ] "Memory" function (status tracking: how long has that cup been sitting there?).
- [ ] Speech output (TTS) of the sarcastic remarks on the Pi.
- [ ] (Optional) Integrate the Tapo C220 via RTSP (once the hardware is available).

## 📝 Notes for NotebookLM

- **Personality:** Sarcastic, slightly passive-aggressive, focused on tidiness.
- **Hardware hack:** The Bluetooth speaker on the Pi needs a stable battery/power supply to absorb load spikes during speaker use.
- **Workflow:** Coding in Antigravity -> push to Gitea -> deployment on the Ryzen/Pi.
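The Phase 3 "memory" idea (how long has that cup been sitting there?) can be sketched as a small per-label tracker that keeps the timestamp of the first sighting and drops labels that disappear. This is a hypothetical design sketch, not code from the repository; `ObjectMemory` and its methods are assumed names:

```python
import time


class ObjectMemory:
    """Tracks how long each detected label has been continuously visible.

    Hypothetical sketch for the Phase 3 "memory" feature; class and
    method names are assumptions, not part of the current code base.
    """

    def __init__(self):
        self.first_seen = {}  # label -> timestamp of first sighting

    def update(self, labels, now=None):
        """Record the current detections; return seconds visible per label."""
        now = time.time() if now is None else now
        # Start the clock for newly appeared objects
        for label in labels:
            self.first_seen.setdefault(label, now)
        # Forget objects that are no longer in view
        for label in list(self.first_seen):
            if label not in labels:
                del self.first_seen[label]
        return {label: now - ts for label, ts in self.first_seen.items()}


memory = ObjectMemory()
memory.update(["cup"], now=0)
ages = memory.update(["cup", "bottle"], now=90)
# ages now says the cup has been in view for 90 s, the bottle just appeared
```

Feeding `found_items` from each analysis into `update()` would let the Brain escalate its remarks once an object's age crosses a threshold.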
@@ -0,0 +1,97 @@
# Home-Bro Brain (The Brain) 🧠

The brain of the Home-Bro project. This central API handles image analysis with YOLOv11 and generates sarcastic comments.

## 🚀 Overview

Home-Bro Brain receives images from satellites (Raspberry Pi, Tapo cameras, ESP32), analyzes them for clutter (cups, bottles, etc.), and returns a passive-aggressive comment.

## 🛠 Tech Stack

- **FastAPI**: Serves the API.
- **YOLOv11 (Ultralytics)**: Object detection.
- **OpenCV**: Processing of camera streams.
- **Docker**: Optional, for containerized deployment.

---

## 💻 Local Installation (Mac/PC)

Follow these steps to test the Brain locally on your machine:

### 1. Clone the repository

```bash
git clone git@git.hnrx.net:homelab/home-bro-brain.git
cd home-bro-brain
```

### 2. Create & activate a virtual environment

```bash
python3 -m venv venv
source venv/bin/activate
```

### 3. Install dependencies

```bash
pip install -r requirements.txt
```

### 4. Configuration

Create a `.env` file (use `.env.example` as a template):

```bash
cp .env.example .env
```

Add the credentials for any Tapo cameras you have:

- `TAPO_USER`: Your camera username
- `TAPO_PASSWORD`: Your camera password

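The `.env` file is plain `KEY=value` lines with `#` comments; the app reads it via `python-dotenv` and `os.getenv`. As a format illustration only (this parser is a hedged standalone sketch, not how the app loads config), the lines can be parsed like this:

```python
def parse_env(text):
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values


example = """
# Later for other services
TAPO_USER=admin
TAPO_PASSWORD=your_password_here
PORT=8000
"""
config = parse_env(example)
```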
### 5. Download the model

The YOLO model (`yolo11n.pt`) is downloaded automatically on first start and stored in the `models/` folder.

---

## 🏃 Running

Start the server locally:

```bash
uvicorn app.main:app --host 0.0.0.0 --reload --reload-dir app
```

The Brain is now reachable at `http://localhost:8000`.

### Local test via the Swagger UI

You can test the API directly in the browser. Open:
`http://localhost:8000/docs`

There you can use the `/analyze/pi` endpoint to upload a photo manually and see Home-Bro's reaction.

---

## 🐳 Docker Deployment (Optional)

If you want to run the Brain permanently (e.g. on a server):

```bash
docker-compose up -d --build
```

---

## 📡 API Endpoints

| Method | Endpoint               | Description                                  |
| :----- | :--------------------- | :------------------------------------------- |
| `GET`  | `/`                    | Shows the web dashboard.                     |
| `GET`  | `/api/latest`          | Returns the latest analysis status.          |
| `POST` | `/analyze/pi`          | Receives an image from a Pi satellite.       |
| `GET`  | `/analyze/tapo/{room}` | Triggers an analysis of a Tapo RTSP stream.  |
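The `/analyze/tapo/{room}` endpoint builds an RTSP URL of the form `rtsp://user:password@ip:554/stream1` from the `TAPO_USER`/`TAPO_PASSWORD` credentials. One gotcha: a password containing `@`, `:` or `/` corrupts that URL. A small sketch (`build_rtsp_url` is a hypothetical helper, not part of the code base) that percent-encodes the credentials first, per standard URL escaping; whether your camera firmware accepts encoded userinfo should be verified against the device:

```python
from urllib.parse import quote


def build_rtsp_url(user, password, ip, stream="stream1"):
    """RTSP URL with percent-encoded credentials so special chars survive."""
    return (
        f"rtsp://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{ip}:554/{stream}"
    )


url = build_rtsp_url("admin", "p@ss:word", "192.168.0.50")
# '@' and ':' in the password are escaped instead of corrupting the URL
```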
@@ -0,0 +1,138 @@
from fastapi import FastAPI, UploadFile, File, Request
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse, FileResponse
import cv2
import os
import time
import socket
import subprocess
import sys
import threading
from .vision import analyze_image_with_yolo

app = FastAPI(title="Home-Bro Brain")


def play_voice(text: str):
    """Generates speech using Piper and plays it on the Mac."""
    def _speak():
        try:
            print(f"DEBUG: Piper TTS for: '{text}'")
            # Path to the Piper binary in the Python 3.12 venv (works on Apple-silicon Macs)
            piper_path = "./venv_piper/bin/piper"
            model_path = "./piper/models/de_DE-thorsten-high.onnx"

            # afplay does not reliably accept raw PCM from stdin,
            # so it is safer to write a short WAV file first.
            wav_path = "snapshots/speech.wav"
            gen_command = (
                f"echo '{text}' | "
                f"{piper_path} --model {model_path} --output_file {wav_path}"
            )

            subprocess.run(gen_command, shell=True, check=True)

            if sys.platform == "darwin":
                subprocess.run(["afplay", wav_path], check=True)

        except Exception as e:
            print(f"Piper TTS error: {e}")

    # Run in a separate thread so the request is not blocked
    threading.Thread(target=_speak).start()


# Ensure the snapshots directory exists before mounting it
os.makedirs("snapshots", exist_ok=True)

# Static files (frontend)
app.mount("/static", StaticFiles(directory="static"), name="static")
app.mount("/snapshots", StaticFiles(directory="snapshots"), name="snapshots")

# In-memory store for the latest status
latest_status = {
    "room": "Warten...",
    "comment": "Noch keine Daten empfangen.",
    "timestamp": None,
    "image_url": None
}


@app.get("/", response_class=HTMLResponse)
async def get_index():
    return FileResponse("static/index.html")


@app.get("/api/latest")
async def get_latest():
    return latest_status


@app.get("/api/info")
async def get_info():
    hostname = socket.gethostname()
    try:
        # Connecting a UDP socket to a public address reveals the local IP
        # without actually sending any packets
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect(("8.8.8.8", 80))
        local_ip = s.getsockname()[0]
        s.close()
    except Exception:
        local_ip = "127.0.0.1"

    return {
        "hostname": hostname,
        "ip": local_ip,
        "port": 8000
    }


@app.post("/analyze/pi")
async def analyze_pi(file: UploadFile = File(...)):
    global latest_status
    # Save the image sent by the Pi
    path = "snapshots/pi_last.jpg"
    content = await file.read()
    with open(path, "wb") as f:
        f.write(content)

    result = analyze_image_with_yolo(path)
    play_voice(result)

    # Update the status
    latest_status = {
        "room": "Wohnzimmer (Pi)",
        "comment": result,
        "timestamp": time.strftime("%H:%M:%S"),
        "image_url": f"/snapshots/pi_last.jpg?t={int(time.time())}"  # Cache busting
    }

    return {"room": "Wohnzimmer", "comment": result}


@app.get("/analyze/tapo/{room_name}")
async def analyze_tapo(room_name: str, ip: str):
    global latest_status
    # RTSP access to the Tapo camera
    user = os.getenv("TAPO_USER")
    pw = os.getenv("TAPO_PASSWORD")
    url = f"rtsp://{user}:{pw}@{ip}:554/stream1"

    cap = cv2.VideoCapture(url)
    ret, frame = cap.read()
    if ret:
        path = f"snapshots/{room_name}_tapo.jpg"
        cv2.imwrite(path, frame)
        result = analyze_image_with_yolo(path)
        play_voice(result)
        cap.release()

        latest_status = {
            "room": room_name,
            "comment": result,
            "timestamp": time.strftime("%H:%M:%S"),
            "image_url": f"/snapshots/{room_name}_tapo.jpg?t={int(time.time())}"
        }

        return {"room": room_name, "comment": result}

    cap.release()
    return {"error": "Tapo nicht erreichbar"}
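The `echo '{text}' | piper …` pipeline in `play_voice` above passes the generated comment through a shell, so a single quote in the text breaks the command (or lets the text be interpreted as shell input). A hedged alternative sketch, assuming the same Piper CLI flags as used above: build the invocation as an argument list and feed the text via stdin, so no shell parsing is involved. `build_piper_argv` and `speak_safely` are hypothetical names, not part of the code base:

```python
import subprocess


def build_piper_argv(piper_path, model_path, wav_path):
    """Argument list for Piper; no shell, so quotes in the text are safe."""
    return [piper_path, "--model", model_path, "--output_file", wav_path]


def speak_safely(text,
                 piper_path="./venv_piper/bin/piper",
                 model_path="./piper/models/de_DE-thorsten-high.onnx",
                 wav_path="snapshots/speech.wav"):
    argv = build_piper_argv(piper_path, model_path, wav_path)
    # The text goes in via stdin instead of `echo '...' |`
    subprocess.run(argv, input=text.encode("utf-8"), check=True)
```

With this shape, a comment like `Ist das Kunst oder kann das weg?` containing quotes or `$` reaches Piper verbatim.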
@@ -0,0 +1,73 @@
from ultralytics import YOLO
import random

# Load the model (the nano version is fast & sufficient for cups/plates)
model = YOLO("models/yolo11n.pt")

# German translations for common objects (kept in German for the de_DE voice)
label_map = {
    "cup": "eine Tasse",
    "bottle": "eine Flasche",
    "cell phone": "ein Handy",
    "person": "einen Menschen (vermutlich Matthias, der schon wieder nichts tut)",
    "laptop": "einen Laptop",
    "chair": "einen Stuhl",
    "remote": "eine Fernbedienung",
    "keyboard": "eine Tastatur",
    "mouse": "eine Maus",
    "bicycle": "ein Fahrrad"
}


def analyze_image_with_yolo(image_path):
    results = model.predict(source=image_path, conf=0.45)

    found_items = []
    for result in results:
        for box in result.boxes:
            label = result.names[int(box.cls)]
            found_items.append(label)

    description = generate_description(found_items)
    insult = generate_insult(found_items)

    return f"{description} {insult}"


def generate_description(items):
    if not items:
        return "Ich sehe absolut gar nichts."

    translated_items = [label_map.get(item, f"ein {item}") for item in set(items)]

    if len(translated_items) == 1:
        return f"Ich sehe {translated_items[0]}."

    last_item = translated_items.pop()
    return f"Ich sehe {', '.join(translated_items)} und {last_item}."


def generate_insult(items):
    # Map detected objects to nagging comments
    insults = {
        "cup": [
            "Matthias, die Tasse auf dem Tisch hat mittlerweile ein eigenes Ökosystem. Räum sie weg!",
            "Ist das Kunst oder kann das weg? Ich meine die Tasse, Matthias.",
            "Noch eine Tasse? Willst du ein Café eröffnen oder bist du einfach nur faul?"
        ],
        "bottle": [
            "Leergut gehört in die Kiste, nicht in mein Sichtfeld.",
            "Flasche leer, Kopf leer? Bring das Ding weg."
        ],
        "cell phone": [
            "Schon wieder am Handy? Kein Wunder, dass hier nichts vorangeht.",
            "Digital Detox würde dir gut tun, Matthias. Leg das Ding weg."
        ]
    }

    # See whether there is anything to complain about
    for item, phrases in insults.items():
        if item in items:
            return random.choice(phrases)

    if not items:
        return "Ich bin mir aber sicher, du hast irgendwo Dreck versteckt."

    return "Eigentlich sieht alles okay aus... Das macht mich erst recht misstrauisch."
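The sentence-joining in `generate_description` is pure string logic and can be exercised without loading a YOLO model. A standalone mirror of just that logic (the label map is trimmed to two entries here for brevity, so this sketch does not claim to be the full module):

```python
label_map = {"cup": "eine Tasse", "bottle": "eine Flasche"}


def generate_description(items):
    """Mirror of the joining logic: 'Ich sehe X, Y und Z.'"""
    if not items:
        return "Ich sehe absolut gar nichts."
    # Unknown labels fall back to 'ein <label>'
    translated = [label_map.get(item, f"ein {item}") for item in set(items)]
    if len(translated) == 1:
        return f"Ich sehe {translated[0]}."
    last = translated.pop()
    return f"Ich sehe {', '.join(translated)} und {last}."


sentence = generate_description(["cup", "bottle"])
```

One design note: `set(items)` deduplicates repeated detections, but it also makes the order of a multi-object sentence depend on hash order; sorting the set first would make the output stable across runs.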
@@ -0,0 +1,15 @@
version: '3.8'

services:
  home-bro-brain:
    build: .
    container_name: home-bro-brain
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - ./snapshots:/app/snapshots
      - ./models:/app/models
    environment:
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
@@ -0,0 +1,13 @@
# Web server & API
fastapi
uvicorn[standard]
python-multipart

# AI & image processing
ultralytics
opencv-python-headless
pillow

# Network & utilities
requests
python-dotenv
Binary file not shown.
@@ -0,0 +1,70 @@
<!doctype html>
<html lang="de">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Home-Bro | Dashboard</title>
    <link rel="stylesheet" href="/static/style.css" />
    <link
      href="https://fonts.googleapis.com/css2?family=Outfit:wght@300;400;600;800&display=swap"
      rel="stylesheet"
    />
  </head>
  <body>
    <div class="background-blobs">
      <div class="blob blob-1"></div>
      <div class="blob blob-2"></div>
    </div>

    <main class="dashboard">
      <header>
        <h1>Home-Bro <span class="accent-text">Brain</span></h1>
        <div id="status-indicator" class="status-badge">Live</div>
      </header>

      <section class="main-grid">
        <div class="card live-view-card">
          <div class="card-header">
            <h2>Letzter Snapshot</h2>
            <span id="last-update" class="timestamp">-</span>
          </div>
          <div class="image-container">
            <img
              id="latest-image"
              src=""
              alt="Warte auf Bild..."
              class="fallback-img"
            />
            <div id="detection-overlay" class="overlay"></div>
          </div>
        </div>

        <div class="card info-card">
          <div class="card-header">
            <h2>Analyse</h2>
            <span id="room-name" class="room-label">Unbekannt</span>
          </div>
          <div class="analysis-content">
            <p id="analysis-text">Warte auf Daten vom Satelliten...</p>
          </div>
          <div class="system-stats">
            <div class="stat">
              <span class="label">Status</span>
              <span class="value green">Online</span>
            </div>
            <div class="stat">
              <span class="label">IP Adresse</span>
              <span id="brain-ip" class="value">Lade...</span>
            </div>
            <div class="stat">
              <span class="label">Hostname</span>
              <span id="brain-host" class="value">Lade...</span>
            </div>
          </div>
        </div>
      </section>
    </main>

    <script src="/static/script.js?v=2"></script>
  </body>
</html>
@@ -0,0 +1,38 @@
async function fetchInfo() {
  try {
    const response = await fetch('/api/info');
    const data = await response.json();
    document.getElementById('brain-ip').textContent = `${data.ip}:${data.port}`;
    document.getElementById('brain-host').textContent = data.hostname;
  } catch (error) {
    console.error('Fehler beim Abrufen der Info:', error);
  }
}

async function updateDashboard() {
  try {
    const response = await fetch('/api/latest');
    const data = await response.json();

    if (data.timestamp) {
      document.getElementById('last-update').textContent = data.timestamp;
      document.getElementById('room-name').textContent = data.room;
      document.getElementById('analysis-text').textContent = data.comment;

      const img = document.getElementById('latest-image');
      if (data.image_url) {
        img.src = data.image_url;
        img.classList.remove('fallback-img');
      }
    }
  } catch (error) {
    console.error('Fehler beim Abrufen der Daten:', error);
  }
}

// Refresh every 2 seconds
setInterval(updateDashboard, 2000);

// Initial call
fetchInfo();
updateDashboard();
@@ -0,0 +1,240 @@
:root {
  --bg-color: #05070a;
  --card-bg: rgba(255, 255, 255, 0.03);
  --border-color: rgba(255, 255, 255, 0.1);
  --accent-color: #38bdf8;
  --text-primary: #f8fafc;
  --text-secondary: #94a3b8;
  --glass-blur: blur(12px);
  --neon-shadow: 0 0 20px rgba(56, 189, 248, 0.2);
}

* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

body {
  font-family: "Outfit", sans-serif;
  background-color: var(--bg-color);
  color: var(--text-primary);
  overflow-x: hidden;
  min-height: 100vh;
}

/* Background Animated Blobs */
.background-blobs {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  z-index: -1;
  overflow: hidden;
}

.blob {
  position: absolute;
  border-radius: 50%;
  filter: blur(80px);
  opacity: 0.15;
  animation: move 20s infinite alternate;
}

.blob-1 {
  width: 400px;
  height: 400px;
  background: var(--accent-color);
  top: -100px;
  right: -100px;
}

.blob-2 {
  width: 300px;
  height: 300px;
  background: #f43f5e;
  bottom: -50px;
  left: -50px;
  animation-delay: -5s;
}

@keyframes move {
  from {
    transform: translate(0, 0);
  }
  to {
    transform: translate(100px, 100px);
  }
}

/* Dashboard Layout */
.dashboard {
  max-width: 1200px;
  margin: 0 auto;
  padding: 2rem;
}

header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 3rem;
}

h1 {
  font-size: 2.5rem;
  font-weight: 800;
  letter-spacing: -1px;
}

.accent-text {
  color: var(--accent-color);
  text-shadow: var(--neon-shadow);
}

.status-badge {
  background: rgba(34, 197, 94, 0.2);
  color: #4ade80;
  padding: 0.5rem 1rem;
  border-radius: 20px;
  font-size: 0.8rem;
  font-weight: 600;
  display: flex;
  align-items: center;
  gap: 8px;
  border: 1px solid rgba(34, 197, 94, 0.3);
}

.status-badge::before {
  content: "";
  width: 8px;
  height: 8px;
  background: #22c55e;
  border-radius: 50%;
  box-shadow: 0 0 10px #22c55e;
  animation: pulse 2s infinite;
}

@keyframes pulse {
  0% {
    opacity: 1;
  }
  50% {
    opacity: 0.4;
  }
  100% {
    opacity: 1;
  }
}

.main-grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
  gap: 2rem;
}

@media (max-width: 900px) {
  .main-grid {
    grid-template-columns: 1fr;
  }
}

/* Card Styling */
.card {
  background: var(--card-bg);
  backdrop-filter: var(--glass-blur);
  border: 1px solid var(--border-color);
  border-radius: 24px;
  padding: 1.5rem;
  transition:
    transform 0.3s ease,
    border-color 0.3s ease;
}

.card:hover {
  border-color: rgba(255, 255, 255, 0.2);
}

.card-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 1.5rem;
}

.card-header h2 {
  font-size: 1.25rem;
  font-weight: 600;
  color: var(--text-secondary);
}

.timestamp,
.room-label {
  font-size: 0.85rem;
  background: rgba(255, 255, 255, 0.05);
  padding: 4px 12px;
  border-radius: 8px;
}

/* Image styling */
.image-container {
  width: 100%;
  aspect-ratio: 16/9;
  background: #000;
  border-radius: 16px;
  overflow: hidden;
  position: relative;
  border: 1px solid rgba(255, 255, 255, 0.05);
}

#latest-image {
  width: 100%;
  height: 100%;
  object-fit: cover;
  transition: opacity 0.5s ease;
}

.fallback-img {
  opacity: 0.3;
}

/* Analysis Styling */
.analysis-content {
  background: rgba(255, 255, 255, 0.02);
  border-radius: 16px;
  padding: 1.5rem;
  margin-bottom: 2rem;
  min-height: 120px;
  display: flex;
  align-items: center;
  border-left: 4px solid var(--accent-color);
}

#analysis-text {
  font-size: 1.1rem;
  line-height: 1.6;
  font-style: italic;
  color: var(--text-primary);
}

.system-stats {
  border-top: 1px solid var(--border-color);
  padding-top: 1.5rem;
}

.stat {
  display: flex;
  justify-content: space-between;
  margin-bottom: 0.5rem;
}

.stat .label {
  color: var(--text-secondary);
}
.stat .value {
  font-weight: 600;
}
.stat .value.green {
  color: #4ade80;
}
@@ -0,0 +1,11 @@
import requests

url = "http://localhost:8000/analyze/pi"
image_path = "/Users/matthias/.gemini/antigravity/brain/8baa96e5-b978-4054-8e90-3451bd7a8b70/messy_room_with_mug_1769719323032.png"

with open(image_path, "rb") as f:
    files = {"file": f}
    response = requests.post(url, files=files)

print(f"Status Code: {response.status_code}")
print(f"Response: {response.json()}")
@@ -0,0 +1,6 @@
import requests
try:
    r = requests.get("http://localhost:8000/api/info")
    print(r.json())
except Exception as e:
    print(f"Error: {e}")
@@ -0,0 +1,675 @@
|
||||
GNU GENERAL PUBLIC LICENSE
|
||||
Version 3, 29 June 2007
|
||||
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
of this license document, but changing it is not allowed.
|
||||
|
||||
Preamble
|
||||
|
||||
The GNU General Public License is a free, copyleft license for
|
||||
software and other kinds of works.
|
||||
|
||||
The licenses for most software and other practical works are designed
|
||||
to take away your freedom to share and change the works. By contrast,
|
||||
the GNU General Public License is intended to guarantee your freedom to
|
||||
share and change all versions of a program--to make sure it remains free
|
||||
software for all its users. We, the Free Software Foundation, use the
|
||||
GNU General Public License for most of our software; it applies also to
|
||||
any other work released this way by its authors. You can apply it to
|
||||
your programs, too.
|
||||
|
||||
When we speak of free software, we are referring to freedom, not
|
||||
price. Our General Public Licenses are designed to make sure that you
|
||||
have the freedom to distribute copies of free software (and charge for
|
||||
them if you wish), that you receive source code or can get it if you
|
||||
want it, that you can change the software or use pieces of it in new
|
||||
free programs, and that you know you can do these things.
|
||||
|
||||
To protect your rights, we need to prevent others from denying you
|
||||
these rights or asking you to surrender the rights. Therefore, you have
|
||||
certain responsibilities if you distribute copies of the software, or if
|
||||
you modify it: responsibilities to respect the freedom of others.
|
||||
|
||||
For example, if you distribute copies of such a program, whether
|
||||
gratis or for a fee, you must pass on to the recipients the same
|
||||
freedoms that you received. You must make sure that they, too, receive
|
||||
or can get the source code. And you must show them these terms so they
|
||||
know their rights.
|
||||
|
||||
Developers that use the GNU GPL protect your rights with two steps:
|
||||
(1) assert copyright on the software, and (2) offer you this License
|
||||
giving you legal permission to copy, distribute and/or modify it.
|
||||
|
||||
For the developers' and authors' protection, the GPL clearly explains
|
||||
that there is no warranty for this free software. For both users' and
|
||||
authors' sake, the GPL requires that modified versions be marked as
|
||||
changed, so that their problems will not be attributed erroneously to
|
||||
authors of previous versions.
|
||||
|
||||
Some devices are designed to deny users access to install or run
|
||||
modified versions of the software inside them, although the manufacturer
|
||||
can do so. This is fundamentally incompatible with the aim of
|
||||
protecting users' freedom to change the software. The systematic
|
||||
pattern of such abuse occurs in the area of products for individuals to
|
||||
use, which is precisely where it is most unacceptable. Therefore, we
|
||||
have designed this version of the GPL to prohibit the practice for those
|
||||
products. If such problems arise substantially in other domains, we
|
||||
stand ready to extend this provision to those domains in future versions
|
||||
of the GPL, as needed to protect the freedom of users.
|
||||
|
||||
Finally, every program is threatened constantly by software patents.
|
||||
States should not allow patents to restrict development and use of
|
||||
software on general-purpose computers, but in those that do, we wish to
|
||||
avoid the special danger that patents applied to a free program could
|
||||
make it effectively proprietary. To prevent this, the GPL assures that
|
||||
patents cannot be used to render the program non-free.
|
||||
|
||||
The precise terms and conditions for copying, distribution and
|
||||
modification follow.
|
||||
|
||||
TERMS AND CONDITIONS
|
||||
|
||||
0. Definitions.
|
||||
|
||||
"This License" refers to version 3 of the GNU General Public License.
|
||||
|
||||
"Copyright" also means copyright-like laws that apply to other kinds of
|
||||
works, such as semiconductor masks.
|
||||
|
||||
"The Program" refers to any copyrightable work licensed under this
|
||||
License. Each licensee is addressed as "you". "Licensees" and
|
||||
"recipients" may be individuals or organizations.
|
||||
|
||||
To "modify" a work means to copy from or adapt all or part of the work
|
||||
in a fashion requiring copyright permission, other than the making of an
|
||||
exact copy. The resulting work is called a "modified version" of the
|
||||
earlier work or a work "based on" the earlier work.
|
||||
|
||||
A "covered work" means either the unmodified Program or a work based
|
||||
on the Program.
|
||||
|
||||
To "propagate" a work means to do anything with it that, without
|
||||
permission, would make you directly or secondarily liable for
|
||||
infringement under applicable copyright law, except executing it on a
|
||||
computer or modifying a private copy. Propagation includes copying,
|
||||
distribution (with or without modification), making available to the
|
||||
public, and in some countries other activities as well.
|
||||
|
||||
To "convey" a work means any kind of propagation that enables other
|
||||
parties to make or receive copies. Mere interaction with a user through
|
||||
a computer network, with no transfer of a copy, is not conveying.
|
||||
|
||||
An interactive user interface displays "Appropriate Legal Notices"
|
||||
to the extent that it includes a convenient and prominently visible
|
||||
feature that (1) displays an appropriate copyright notice, and (2)
|
||||
tells the user that there is no warranty for the work (except to the
|
||||
extent that warranties are provided), that licensees may convey the
|
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
@@ -0,0 +1,247 @@
<#
.Synopsis
Activate a Python virtual environment for the current PowerShell session.

.Description
Pushes the python executable for a virtual environment to the front of the
$Env:PATH environment variable and sets the prompt to signify that you are
in a Python virtual environment. Makes use of the command line switches as
well as the `pyvenv.cfg` file values present in the virtual environment.

.Parameter VenvDir
Path to the directory that contains the virtual environment to activate. The
default value for this is the parent of the directory that the Activate.ps1
script is located within.

.Parameter Prompt
The prompt prefix to display when this virtual environment is activated. By
default, this prompt is the name of the virtual environment folder (VenvDir)
surrounded by parentheses and followed by a single space (ie. '(.venv) ').

.Example
Activate.ps1
Activates the Python virtual environment that contains the Activate.ps1 script.

.Example
Activate.ps1 -Verbose
Activates the Python virtual environment that contains the Activate.ps1 script,
and shows extra information about the activation as it executes.

.Example
Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv
Activates the Python virtual environment located in the specified location.

.Example
Activate.ps1 -Prompt "MyPython"
Activates the Python virtual environment that contains the Activate.ps1 script,
and prefixes the current prompt with the specified string (surrounded in
parentheses) while the virtual environment is active.

.Notes
On Windows, it may be required to enable this Activate.ps1 script by setting the
execution policy for the user. You can do this by issuing the following PowerShell
command:

PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

For more information on Execution Policies:
https://go.microsoft.com/fwlink/?LinkID=135170

#>
Param(
    [Parameter(Mandatory = $false)]
    [String]
    $VenvDir,
    [Parameter(Mandatory = $false)]
    [String]
    $Prompt
)

<# Function declarations --------------------------------------------------- #>

<#
.Synopsis
Remove all shell session elements added by the Activate script, including the
addition of the virtual environment's Python executable from the beginning of
the PATH variable.

.Parameter NonDestructive
If present, do not remove this function from the global namespace for the
session.

#>
function global:deactivate ([switch]$NonDestructive) {
    # Revert to original values

    # The prior prompt:
    if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) {
        Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt
        Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT
    }

    # The prior PYTHONHOME:
    if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) {
        Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME
        Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME
    }

    # The prior PATH:
    if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) {
        Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH
        Remove-Item -Path Env:_OLD_VIRTUAL_PATH
    }

    # Just remove the VIRTUAL_ENV altogether:
    if (Test-Path -Path Env:VIRTUAL_ENV) {
        Remove-Item -Path env:VIRTUAL_ENV
    }

    # Just remove VIRTUAL_ENV_PROMPT altogether.
    if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
        Remove-Item -Path env:VIRTUAL_ENV_PROMPT
    }

    # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
    if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
        Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
    }

    # Leave deactivate function in the global namespace if requested:
    if (-not $NonDestructive) {
        Remove-Item -Path function:deactivate
    }
}

<#
.Description
Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
given folder, and returns them in a map.

For each line in the pyvenv.cfg file, if that line can be parsed into exactly
two strings separated by `=` (with any amount of whitespace surrounding the =)
then it is considered a `key = value` line. The left hand string is the key,
the right hand is the value.

If the value starts with a `'` or a `"` then the first and last character is
stripped from the value before being captured.

.Parameter ConfigDir
Path to the directory that contains the `pyvenv.cfg` file.
#>
function Get-PyVenvConfig(
    [String]
    $ConfigDir
) {
    Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"

    # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
    $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue

    # An empty map will be returned if no config file is found.
    $pyvenvConfig = @{ }

    if ($pyvenvConfigPath) {

        Write-Verbose "File exists, parse `key = value` lines"
        $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath

        $pyvenvConfigContent | ForEach-Object {
            $keyval = $PSItem -split "\s*=\s*", 2
            if ($keyval[0] -and $keyval[1]) {
                $val = $keyval[1]

                # Remove extraneous quotations around a string value.
                if ("'""".Contains($val.Substring(0, 1))) {
                    $val = $val.Substring(1, $val.Length - 2)
                }

                $pyvenvConfig[$keyval[0]] = $val
                Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
            }
        }
    }
    return $pyvenvConfig
}


<# Begin Activate script --------------------------------------------------- #>

# Determine the containing directory of this script
$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
$VenvExecDir = Get-Item -Path $VenvExecPath

Write-Verbose "Activation script is located in path: '$VenvExecPath'"
Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"

# Set values required in priority: CmdLine, ConfigFile, Default
# First, get the location of the virtual environment, it might not be
# VenvExecDir if specified on the command line.
if ($VenvDir) {
    Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
}
else {
    Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
    $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
    Write-Verbose "VenvDir=$VenvDir"
}

# Next, read the `pyvenv.cfg` file to determine any required value such
# as `prompt`.
$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir

# Next, set the prompt from the command line, or the config file, or
# just use the name of the virtual environment folder.
if ($Prompt) {
    Write-Verbose "Prompt specified as argument, using '$Prompt'"
}
else {
    Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
    if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
        Write-Verbose "  Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
        $Prompt = $pyvenvCfg['prompt'];
    }
    else {
        Write-Verbose "  Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
        Write-Verbose "  Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
        $Prompt = Split-Path -Path $venvDir -Leaf
    }
}

Write-Verbose "Prompt = '$Prompt'"
Write-Verbose "VenvDir='$VenvDir'"

# Deactivate any currently active virtual environment, but leave the
# deactivate function in place.
deactivate -nondestructive

# Now set the environment variable VIRTUAL_ENV, used by many tools to determine
# that there is an activated venv.
$env:VIRTUAL_ENV = $VenvDir

if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {

    Write-Verbose "Setting prompt to '$Prompt'"

    # Set the prompt to include the env name
    # Make sure _OLD_VIRTUAL_PROMPT is global
    function global:_OLD_VIRTUAL_PROMPT { "" }
    Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
    New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt

    function global:prompt {
        Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
        _OLD_VIRTUAL_PROMPT
    }
    $env:VIRTUAL_ENV_PROMPT = $Prompt
}

# Clear PYTHONHOME
if (Test-Path -Path Env:PYTHONHOME) {
    Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
    Remove-Item -Path Env:PYTHONHOME
}

# Add the venv to the PATH
Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"
@@ -0,0 +1,76 @@
# This file must be used with "source bin/activate" *from bash*
# You cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # Call hash to forget past locations. Without forgetting
    # past locations the $PATH changes we made may not be respected.
    # See "man bash" for more details. hash is usually a builtin of your shell
    hash -r 2> /dev/null

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    unset VIRTUAL_ENV_PROMPT
    if [ ! "${1:-}" = "nondestructive" ] ; then
    # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

# on Windows, a path can contain colons and backslashes and has to be converted:
case "$(uname)" in
    CYGWIN*|MSYS*|MINGW*)
        # transform D:\path\to\venv to /d/path/to/venv on MSYS and MINGW
        # and to /cygdrive/d/path/to/venv on Cygwin
        VIRTUAL_ENV=$(cygpath /Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper)
        export VIRTUAL_ENV
        ;;
    *)
        # use the path as-is
        export VIRTUAL_ENV=/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper
        ;;
esac

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/"bin":$PATH"
export PATH

VIRTUAL_ENV_PROMPT='(venv_piper) '
export VIRTUAL_ENV_PROMPT

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    PS1="("'(venv_piper) '") ${PS1:-}"
    export PS1
fi

# Call hash to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
hash -r 2> /dev/null
@@ -0,0 +1,27 @@
# This file must be used with "source bin/activate.csh" *from csh*.
# You cannot run it directly.

# Created by Davide Di Blasi <davidedb@gmail.com>.
# Ported to Python 3.3 venv by Andrew Svetlov <andrew.svetlov@gmail.com>

alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate'

# Unset irrelevant variables.
deactivate nondestructive

setenv VIRTUAL_ENV /Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper

set _OLD_VIRTUAL_PATH="$PATH"
setenv PATH "$VIRTUAL_ENV/"bin":$PATH"


set _OLD_VIRTUAL_PROMPT="$prompt"

if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then
    set prompt = '(venv_piper) '"$prompt"
    setenv VIRTUAL_ENV_PROMPT '(venv_piper) '
endif

alias pydoc python -m pydoc

rehash
@@ -0,0 +1,69 @@
# This file must be used with "source <venv>/bin/activate.fish" *from fish*
# (https://fishshell.com/). You cannot run it directly.

function deactivate -d "Exit virtual environment and return to normal shell environment"
    # reset old environment variables
    if test -n "$_OLD_VIRTUAL_PATH"
        set -gx PATH $_OLD_VIRTUAL_PATH
        set -e _OLD_VIRTUAL_PATH
    end
    if test -n "$_OLD_VIRTUAL_PYTHONHOME"
        set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME
        set -e _OLD_VIRTUAL_PYTHONHOME
    end

    if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
        set -e _OLD_FISH_PROMPT_OVERRIDE
        # prevents error when using nested fish instances (Issue #93858)
        if functions -q _old_fish_prompt
            functions -e fish_prompt
            functions -c _old_fish_prompt fish_prompt
            functions -e _old_fish_prompt
        end
    end

    set -e VIRTUAL_ENV
    set -e VIRTUAL_ENV_PROMPT
    if test "$argv[1]" != "nondestructive"
        # Self-destruct!
        functions -e deactivate
    end
end

# Unset irrelevant variables.
deactivate nondestructive

set -gx VIRTUAL_ENV /Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper

set -gx _OLD_VIRTUAL_PATH $PATH
set -gx PATH "$VIRTUAL_ENV/"bin $PATH

# Unset PYTHONHOME if set.
if set -q PYTHONHOME
    set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME
    set -e PYTHONHOME
end

if test -z "$VIRTUAL_ENV_DISABLE_PROMPT"
    # fish uses a function instead of an env var to generate the prompt.

    # Save the current fish_prompt function as the function _old_fish_prompt.
    functions -c fish_prompt _old_fish_prompt

    # With the original prompt function renamed, we can override with our own.
    function fish_prompt
        # Save the return status of the last command.
        set -l old_status $status

        # Output the venv prompt; color taken from the blue of the Python logo.
        printf "%s%s%s" (set_color 4B8BBE) '(venv_piper) ' (set_color normal)

        # Restore the return status of the previous command.
        echo "exit $old_status" | .
        # Output the original/"old" prompt.
        _old_fish_prompt
    end

    set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV"
    set -gx VIRTUAL_ENV_PROMPT '(venv_piper) '
end
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from coloredlogs.cli import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from numpy.f2py.f2py2e import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from humanfriendly.cli import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from isympy import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from numpy._configtool import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from onnxruntime.tools.onnxruntime_test import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python3.12
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python3.12
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python3.12
import sys
from pip._internal.cli.main import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Executable
+7
@@ -0,0 +1,7 @@
#!/Users/matthias/Documents/Projects/Homelab/Home-Bro/home-bro-brain/venv_piper/bin/python
import sys
from piper.__main__ import main
if __name__ == '__main__':
    if sys.argv[0].endswith('.exe'):
        sys.argv[0] = sys.argv[0][:-4]
    sys.exit(main())
Symlink
+1
@@ -0,0 +1 @@
python3.12
Symlink
+1
@@ -0,0 +1 @@
python3.12
Symlink
+1
@@ -0,0 +1 @@
/opt/homebrew/opt/python@3.12/bin/python3.12
@@ -0,0 +1 @@
pip
@@ -0,0 +1,20 @@
Copyright (c) 2020 Peter Odding

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,291 @@
Metadata-Version: 2.1
Name: coloredlogs
Version: 15.0.1
Summary: Colored terminal output for Python's logging module
Home-page: https://coloredlogs.readthedocs.io
Author: Peter Odding
Author-email: peter@peterodding.com
License: MIT
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Communications
Classifier: Topic :: Scientific/Engineering :: Human Machine Interfaces
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: User Interfaces
Classifier: Topic :: System
Classifier: Topic :: System :: Shells
Classifier: Topic :: System :: System Shells
Classifier: Topic :: System :: Console Fonts
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Terminals
Classifier: Topic :: Utilities
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
Requires-Dist: humanfriendly (>=9.1)
Provides-Extra: cron
Requires-Dist: capturer (>=2.4) ; extra == 'cron'

coloredlogs: Colored terminal output for Python's logging module
================================================================

.. image:: https://travis-ci.org/xolox/python-coloredlogs.svg?branch=master
   :target: https://travis-ci.org/xolox/python-coloredlogs

.. image:: https://coveralls.io/repos/github/xolox/python-coloredlogs/badge.svg?branch=master
   :target: https://coveralls.io/github/xolox/python-coloredlogs?branch=master

The `coloredlogs` package enables colored terminal output for Python's logging_
module. The ColoredFormatter_ class inherits from `logging.Formatter`_ and uses
`ANSI escape sequences`_ to render your logging messages in color. It uses only
standard colors so it should work on any UNIX terminal. It's currently tested
on Python 2.7, 3.5+ and PyPy (2 and 3). On Windows `coloredlogs` automatically
tries to enable native ANSI support (on up-to-date Windows 10 installations)
and falls back on using colorama_ (if installed). Here is a screen shot of the
demo that is printed when the command ``coloredlogs --demo`` is executed:

.. image:: https://coloredlogs.readthedocs.io/en/latest/_images/defaults.png

Note that the screenshot above includes custom logging levels defined by my
verboselogs_ package: if you install both `coloredlogs` and `verboselogs` it
will Just Work (`verboselogs` is of course not required to use
`coloredlogs`).

.. contents::
   :local:

Installation
------------

The `coloredlogs` package is available on PyPI_ which means installation should
be as simple as:

.. code-block:: console

   $ pip install coloredlogs

There's actually a multitude of ways to install Python packages (e.g. the `per
user site-packages directory`_, `virtual environments`_ or just installing
system wide) and I have no intention of getting into that discussion here, so
if this intimidates you then read up on your options before returning to these
instructions 😉.

Optional dependencies
~~~~~~~~~~~~~~~~~~~~~

Native ANSI support on Windows requires an up-to-date Windows 10 installation.
If this is not working for you then consider installing the colorama_ package:

.. code-block:: console

   $ pip install colorama

Once colorama_ is installed it will be used automatically.

Usage
-----

Here's an example of how easy it is to get started:

.. code-block:: python

   import coloredlogs, logging

   # Create a logger object.
   logger = logging.getLogger(__name__)

   # By default the install() function installs a handler on the root logger,
   # this means that log messages from your code and log messages from the
   # libraries that you use will all show up on the terminal.
   coloredlogs.install(level='DEBUG')

   # If you don't want to see log messages from libraries, you can pass a
   # specific logger object to the install() function. In this case only log
   # messages originating from that logger will show up on the terminal.
   coloredlogs.install(level='DEBUG', logger=logger)

   # Some examples.
   logger.debug("this is a debugging message")
   logger.info("this is an informational message")
   logger.warning("this is a warning message")
   logger.error("this is an error message")
   logger.critical("this is a critical message")

Format of log messages
----------------------

The ColoredFormatter_ class supports user defined log formats so you can use
any log format you like. The default log format is as follows::

   %(asctime)s %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

This log format results in the following output::

   2015-10-23 03:32:22 peter-macbook coloredlogs.demo[30462] DEBUG message with level 'debug'
   2015-10-23 03:32:23 peter-macbook coloredlogs.demo[30462] VERBOSE message with level 'verbose'
   2015-10-23 03:32:24 peter-macbook coloredlogs.demo[30462] INFO message with level 'info'
   ...

You can customize the log format and styling using environment variables as
well as programmatically, please refer to the `online documentation`_ for
details.

Enabling millisecond precision
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you're switching from `logging.basicConfig()`_ to `coloredlogs.install()`_
you may notice that timestamps no longer include milliseconds. This is because
coloredlogs doesn't output milliseconds in timestamps unless you explicitly
tell it to. There are three ways to do that:

1. The easy way is to pass the `milliseconds` argument to `coloredlogs.install()`_::

      coloredlogs.install(milliseconds=True)

   This became supported in `release 7.1`_ (due to `#16`_).

2. Alternatively you can change the log format `to include 'msecs'`_::

      %(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s

   Here's what the call to `coloredlogs.install()`_ would then look like::

      coloredlogs.install(fmt='%(asctime)s,%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s')

   Customizing the log format also enables you to change the delimiter that
   separates seconds from milliseconds (the comma above). This became possible
   in `release 3.0`_ which added support for user defined log formats.

3. If the use of ``%(msecs)d`` isn't flexible enough you can instead add ``%f``
   to the date/time format, it will be replaced by the value of ``%(msecs)03d``.
   Support for the ``%f`` directive was added to `release 9.3`_ (due to `#45`_).
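Option 2 above relies only on standard ``logging`` placeholders, so its effect can be previewed without coloredlogs itself. The following is a minimal standard-library sketch (the ``datefmt`` argument is an assumption needed here because the stdlib formatter otherwise appends milliseconds to ``%(asctime)s`` on its own):

```python
import logging

# A format string in the style of option 2: seconds and milliseconds
# separated by a comma via the %(msecs)03d placeholder.
fmt = "%(asctime)s,%(msecs)03d %(name)s[%(process)d] %(levelname)s %(message)s"

# Passing an explicit datefmt keeps %(asctime)s free of its default
# millisecond suffix, so %(msecs)03d supplies the milliseconds instead.
formatter = logging.Formatter(fmt, datefmt="%H:%M:%S")

record = logging.LogRecord("demo", logging.INFO, __file__, 1, "hello", None, None)
line = formatter.format(record)
# line now looks like "12:34:56,789 demo[12345] INFO hello"
```

Swapping the comma in ``fmt`` for another character changes the delimiter, which is the customization the paragraph above describes.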

Custom logging fields
~~~~~~~~~~~~~~~~~~~~~

The following custom log format fields are supported:

- ``%(hostname)s`` provides the hostname of the local system.
- ``%(programname)s`` provides the name of the currently running program.
- ``%(username)s`` provides the username of the currently logged in user.

When `coloredlogs.install()`_ detects that any of these fields are used in the
format string the applicable logging.Filter_ subclasses are automatically
registered to populate the relevant log record fields.
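The filter mechanism this paragraph describes can be sketched with the standard library alone; ``HostnameFilter`` below is a hypothetical stand-in for the filters coloredlogs registers, not its actual class:

```python
import logging
import socket

class HostnameFilter(logging.Filter):
    """Illustrative filter: attach a `hostname` attribute to every record
    so a format string containing %(hostname)s can be rendered."""

    def filter(self, record):
        record.hostname = socket.gethostname()
        return True  # never suppress the record, only annotate it

handler = logging.StreamHandler()
handler.addFilter(HostnameFilter())
handler.setFormatter(logging.Formatter("%(hostname)s %(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("this record now carries a hostname field")
```

The same pattern would apply to ``%(programname)s`` and ``%(username)s``: a filter annotates each record before the formatter sees it.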
|
||||
|
||||
Changing text styles and colors
|
||||
-------------------------------
|
||||
|
||||
The online documentation contains `an example of customizing the text styles and
|
||||
colors <https://coloredlogs.readthedocs.io/en/latest/api.html#changing-the-colors-styles>`_.
|
||||
|
||||
Colored output from cron
|
||||
------------------------
|
||||
|
||||
When `coloredlogs` is used in a cron_ job, the output that's e-mailed to you by
|
||||
cron won't contain any ANSI escape sequences because `coloredlogs` realizes
|
||||
that it's not attached to an interactive terminal. If you'd like to have colors
|
||||
e-mailed to you by cron there are two ways to make it happen:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
Modifying your crontab
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Here's an example of a minimal crontab::
|
||||
|
||||
MAILTO="your-email-address@here"
|
||||
CONTENT_TYPE="text/html"
|
||||
* * * * * root coloredlogs --to-html your-command
|
||||
|
||||
The ``coloredlogs`` program is installed when you install the `coloredlogs`
|
||||
Python package. When you execute ``coloredlogs --to-html your-command`` it runs
|
||||
``your-command`` under the external program ``script`` (you need to have this
|
||||
installed). This makes ``your-command`` think that it's attached to an
|
||||
interactive terminal which means it will output ANSI escape sequences which
|
||||
will then be converted to HTML by the ``coloredlogs`` program. Yes, this is a
|
||||
bit convoluted, but it works great :-)
|
||||
|
||||
Modifying your Python code
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ColoredCronMailer_ class provides a context manager that automatically
|
||||
enables HTML output when the ``$CONTENT_TYPE`` variable has been correctly set
|
||||
in the crontab.
|
||||
|
||||
This requires my capturer_ package which you can install using ``pip install
|
||||
'coloredlogs[cron]'``. The ``[cron]`` extra will pull in capturer_ 2.4 or newer
|
||||
which is required to capture the output while silencing it - otherwise you'd
|
||||
get duplicate output in the emails sent by ``cron``.
|
||||
|
||||
The context manager can also be used to retroactively silence output that has
|
||||
already been produced, this can be useful to avoid spammy cron jobs that have
|
||||
nothing useful to do but still email their output to the system administrator
|
||||
every few minutes :-).

Contact
-------

The latest version of `coloredlogs` is available on PyPI_ and GitHub_. The
`online documentation`_ is available on Read The Docs and includes a
changelog_. For bug reports please create an issue on GitHub_. If you have
questions, suggestions, etc. feel free to send me an e-mail at
`peter@peterodding.com`_.

License
-------

This software is licensed under the `MIT license`_.

© 2020 Peter Odding.


.. External references:
.. _#16: https://github.com/xolox/python-coloredlogs/issues/16
.. _#45: https://github.com/xolox/python-coloredlogs/issues/45
.. _ANSI escape sequences: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
.. _capturer: https://pypi.org/project/capturer
.. _changelog: https://coloredlogs.readthedocs.org/en/latest/changelog.html
.. _colorama: https://pypi.org/project/colorama
.. _ColoredCronMailer: https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.converter.ColoredCronMailer
.. _ColoredFormatter: https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.ColoredFormatter
.. _coloredlogs.install(): https://coloredlogs.readthedocs.io/en/latest/api.html#coloredlogs.install
.. _cron: https://en.wikipedia.org/wiki/Cron
.. _GitHub: https://github.com/xolox/python-coloredlogs
.. _logging.basicConfig(): https://docs.python.org/2/library/logging.html#logging.basicConfig
.. _logging.Filter: https://docs.python.org/3/library/logging.html#filter-objects
.. _logging.Formatter: https://docs.python.org/2/library/logging.html#logging.Formatter
.. _logging: https://docs.python.org/2/library/logging.html
.. _MIT license: https://en.wikipedia.org/wiki/MIT_License
.. _online documentation: https://coloredlogs.readthedocs.io/
.. _per user site-packages directory: https://www.python.org/dev/peps/pep-0370/
.. _peter@peterodding.com: peter@peterodding.com
.. _PyPI: https://pypi.org/project/coloredlogs
.. _release 3.0: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-3-0-2015-10-23
.. _release 7.1: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-7-1-2017-07-15
.. _release 9.3: https://coloredlogs.readthedocs.io/en/latest/changelog.html#release-9-3-2018-04-29
.. _to include 'msecs': https://stackoverflow.com/questions/6290739/python-logging-use-milliseconds-in-time-format
.. _verboselogs: https://pypi.org/project/verboselogs
.. _virtual environments: http://docs.python-guide.org/en/latest/dev/virtualenvs/
@@ -0,0 +1,23 @@
../../../bin/coloredlogs,sha256=q-s7Du259fyzdKUNCF3nvGID0d2Ikl-6lvZd83I8vLY,259
coloredlogs-15.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
coloredlogs-15.0.1.dist-info/LICENSE.txt,sha256=C-9P-xhVtlVwME8gGMbUTkurl71d-HFuzXgAAU1xcmc,1056
coloredlogs-15.0.1.dist-info/METADATA,sha256=FmO6unRvNe77JJ2UU0XYhWbMTzZDhj6zGpEljDysZ0w,12387
coloredlogs-15.0.1.dist-info/RECORD,,
coloredlogs-15.0.1.dist-info/WHEEL,sha256=kGT74LWyRUZrL4VgLh6_g12IeVl_9u9ZVhadrgXZUEY,110
coloredlogs-15.0.1.dist-info/entry_points.txt,sha256=uOmYMPjy7X6effmVh56i3U3N9S13DTRozJOAncBtZG0,54
coloredlogs-15.0.1.dist-info/top_level.txt,sha256=D3LuRvusEQ8xKUhPowPUmPWNm88FSw8ts3x2ulvCSyQ,12
coloredlogs.pth,sha256=3ag6hVmG76XNh_AkiwGZwAhusOjn_s59Z0GVnFzjlTY,147
coloredlogs/__init__.py,sha256=FxaiI1pQ0JzRYbx2K5f_Ar9Wa1meZLg5XKGWHKgPBNo,64423
coloredlogs/__pycache__/__init__.cpython-312.pyc,,
coloredlogs/__pycache__/cli.cpython-312.pyc,,
coloredlogs/__pycache__/demo.cpython-312.pyc,,
coloredlogs/__pycache__/syslog.cpython-312.pyc,,
coloredlogs/__pycache__/tests.cpython-312.pyc,,
coloredlogs/cli.py,sha256=iaHgUVeHPl5iIecKG1WJMslOJEKdhEusbxj7B4r1sHI,3493
coloredlogs/converter/__init__.py,sha256=t08e9V8-Ed9y9eNSYv3x1DH0hXgaXHJmQWp54j_a2nk,18353
coloredlogs/converter/__pycache__/__init__.cpython-312.pyc,,
coloredlogs/converter/__pycache__/colors.cpython-312.pyc,,
coloredlogs/converter/colors.py,sha256=1N2PpCa-EYMMWyC6Dw7SG2WX1gC67F9F5OteeFW52SM,5387
coloredlogs/demo.py,sha256=CaPVGOB6rCtJDyeqk0VBLi0t05jgUEniuB__VYo5kho,1825
coloredlogs/syslog.py,sha256=-XyRUzI20QKTTtVqhJAtfYc-ljw0crsjxqjWDjqrcqU,11849
coloredlogs/tests.py,sha256=oe_pWnTGfNEI7spY3_VA62fX_09ZEVFVERvr9sBMqnM,30791
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.34.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

@@ -0,0 +1,3 @@
[console_scripts]
coloredlogs = coloredlogs.cli:main

@@ -0,0 +1 @@
coloredlogs
@@ -0,0 +1 @@
import os; exec('try: __import__("coloredlogs").auto_install() if os.environ.get("COLOREDLOGS_AUTO_INSTALL") else None\nexcept ImportError: pass')
File diff suppressed because it is too large
@@ -0,0 +1,106 @@
# Command line interface for the coloredlogs package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: December 15, 2017
# URL: https://coloredlogs.readthedocs.io

"""
Usage: coloredlogs [OPTIONS] [ARGS]

The coloredlogs program provides a simple command line interface for the Python
package by the same name.

Supported options:

  -c, --convert, --to-html

    Capture the output of an external command (given by the positional
    arguments) and convert ANSI escape sequences in the output to HTML.

    If the `coloredlogs' program is attached to an interactive terminal it will
    write the generated HTML to a temporary file and open that file in a web
    browser, otherwise the generated HTML will be written to standard output.

    This requires the `script' program to fake the external command into
    thinking that it's attached to an interactive terminal (in order to enable
    output of ANSI escape sequences).

    If the command didn't produce any output then no HTML will be produced on
    standard output, this is to avoid empty emails from cron jobs.

  -d, --demo

    Perform a simple demonstration of the coloredlogs package to show the
    colored logging on an interactive terminal.

  -h, --help

    Show this message and exit.
"""

# Standard library modules.
import functools
import getopt
import logging
import sys
import tempfile
import webbrowser

# External dependencies.
from humanfriendly.terminal import connected_to_terminal, output, usage, warning

# Modules included in our package.
from coloredlogs.converter import capture, convert
from coloredlogs.demo import demonstrate_colored_logging

# Initialize a logger for this module.
logger = logging.getLogger(__name__)


def main():
    """Command line interface for the ``coloredlogs`` program."""
    actions = []
    try:
        # Parse the command line arguments.
        options, arguments = getopt.getopt(sys.argv[1:], 'cdh', [
            'convert', 'to-html', 'demo', 'help',
        ])
        # Map command line options to actions.
        for option, value in options:
            if option in ('-c', '--convert', '--to-html'):
                actions.append(functools.partial(convert_command_output, *arguments))
                arguments = []
            elif option in ('-d', '--demo'):
                actions.append(demonstrate_colored_logging)
            elif option in ('-h', '--help'):
                usage(__doc__)
                return
            else:
                assert False, "Programming error: Unhandled option!"
        if not actions:
            usage(__doc__)
            return
    except Exception as e:
        warning("Error: %s", e)
        sys.exit(1)
    for function in actions:
        function()


def convert_command_output(*command):
    """
    Command line interface for ``coloredlogs --to-html``.

    Takes a command (and its arguments) and runs the program under ``script``
    (emulating an interactive terminal), intercepts the output of the command
    and converts ANSI escape sequences in the output to HTML.
    """
    captured_output = capture(command)
    converted_output = convert(captured_output)
    if connected_to_terminal():
        fd, temporary_file = tempfile.mkstemp(suffix='.html')
        with open(temporary_file, 'w') as handle:
            handle.write(converted_output)
        webbrowser.open(temporary_file)
    elif captured_output and not captured_output.isspace():
        output(converted_output)
@@ -0,0 +1,403 @@
# Program to convert text with ANSI escape sequences to HTML.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: February 14, 2020
# URL: https://coloredlogs.readthedocs.io

"""Convert text with ANSI escape sequences to HTML."""

# Standard library modules.
import codecs
import os
import pipes
import re
import subprocess
import tempfile

# External dependencies.
from humanfriendly.terminal import (
    ANSI_CSI,
    ANSI_TEXT_STYLES,
    clean_terminal_output,
    output,
)

# Modules included in our package.
from coloredlogs.converter.colors import (
    BRIGHT_COLOR_PALETTE,
    EIGHT_COLOR_PALETTE,
    EXTENDED_COLOR_PALETTE,
)

# Compiled regular expression that matches leading spaces (indentation).
INDENT_PATTERN = re.compile('^ +', re.MULTILINE)

# Compiled regular expression that matches a tag followed by a space at the start of a line.
TAG_INDENT_PATTERN = re.compile('^(<[^>]+>) ', re.MULTILINE)

# Compiled regular expression that matches strings we want to convert. Used to
# separate all special strings and literal output in a single pass (this allows
# us to properly encode the output without resorting to nasty hacks).
TOKEN_PATTERN = re.compile(r'''
    # Wrap the pattern in a capture group so that re.split() includes the
    # substrings that match the pattern in the resulting list of strings.
    (
        # Match URLs with supported schemes and domain names.
        (?: https?:// | www\. )
        # Scan until the end of the URL by matching non-whitespace characters
        # that are also not escape characters.
        [^\s\x1b]+
        # Alternatively ...
        |
        # Match (what looks like) ANSI escape sequences.
        \x1b \[ .*? m
    )
''', re.UNICODE | re.VERBOSE)


def capture(command, encoding='UTF-8'):
    """
    Capture the output of an external command as if it runs in an interactive terminal.

    :param command: The command name and its arguments (a list of strings).
    :param encoding: The encoding to use to decode the output (a string).
    :returns: The output of the command.

    This function runs an external command under ``script`` (emulating an
    interactive terminal) to capture the output of the command as if it was
    running in an interactive terminal (including ANSI escape sequences).
    """
    with open(os.devnull, 'wb') as dev_null:
        # We start by invoking the `script' program in a form that is supported
        # by the Linux implementation [1] but fails command line validation on
        # the MacOS (BSD) implementation [2]: The command is specified using
        # the -c option and the typescript file is /dev/null.
        #
        # [1] http://man7.org/linux/man-pages/man1/script.1.html
        # [2] https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/script.1.html
        command_line = ['script', '-qc', ' '.join(map(pipes.quote, command)), '/dev/null']
        script = subprocess.Popen(command_line, stdout=subprocess.PIPE, stderr=dev_null)
        stdout, stderr = script.communicate()
        if script.returncode == 0:
            # If `script' succeeded we assume that it understood our command line
            # invocation which means it's the Linux implementation (in this case
            # we can use standard output instead of a temporary file).
            output = stdout.decode(encoding)
        else:
            # If `script' failed we assume that it didn't understand our command
            # line invocation which means it's the MacOS (BSD) implementation
            # (in this case we need a temporary file because the command line
            # interface requires it).
            fd, temporary_file = tempfile.mkstemp(prefix='coloredlogs-', suffix='-capture.txt')
            try:
                command_line = ['script', '-q', temporary_file] + list(command)
                subprocess.Popen(command_line, stdout=dev_null, stderr=dev_null).wait()
                with codecs.open(temporary_file, 'rb') as handle:
                    output = handle.read()
            finally:
                os.unlink(temporary_file)
            # On MacOS when standard input is /dev/null I've observed
            # the captured output starting with the characters '^D':
            #
            #   $ script -q capture.txt echo example </dev/null
            #   example
            #   $ xxd capture.txt
            #   00000000: 5e44 0808 6578 616d 706c 650d 0a    ^D..example..
            #
            # I'm not sure why this is here, although I suppose it has to do
            # with ^D in caret notation signifying end-of-file [1]. What I do
            # know is that this is an implementation detail that callers of the
            # capture() function shouldn't be bothered with, so we strip it.
            #
            # [1] https://en.wikipedia.org/wiki/End-of-file
            if output.startswith(b'^D'):
                output = output[2:]
            output = output.decode(encoding)
    # Clean up backspace and carriage return characters and the 'erase line'
    # ANSI escape sequence and return the output as a Unicode string.
    return u'\n'.join(clean_terminal_output(output))


def convert(text, code=True, tabsize=4):
    """
    Convert text with ANSI escape sequences to HTML.

    :param text: The text with ANSI escape sequences (a string).
    :param code: Whether to wrap the returned HTML fragment in a
                 ``<code>...</code>`` element (a boolean, defaults
                 to :data:`True`).
    :param tabsize: Refer to :func:`str.expandtabs()` for details.
    :returns: The text converted to HTML (a string).
    """
    output = []
    in_span = False
    compatible_text_styles = {
        # The following ANSI text styles have an obvious mapping to CSS.
        ANSI_TEXT_STYLES['bold']: {'font-weight': 'bold'},
        ANSI_TEXT_STYLES['strike_through']: {'text-decoration': 'line-through'},
        ANSI_TEXT_STYLES['underline']: {'text-decoration': 'underline'},
    }
    for token in TOKEN_PATTERN.split(text):
        if token.startswith(('http://', 'https://', 'www.')):
            url = token if '://' in token else ('http://' + token)
            token = u'<a href="%s" style="color:inherit">%s</a>' % (html_encode(url), html_encode(token))
        elif token.startswith(ANSI_CSI):
            ansi_codes = token[len(ANSI_CSI):-1].split(';')
            if all(c.isdigit() for c in ansi_codes):
                ansi_codes = list(map(int, ansi_codes))
            # First we check for a reset code to close the previous <span>
            # element. As explained on Wikipedia [1] an absence of codes
            # implies a reset code as well: "No parameters at all in ESC[m acts
            # like a 0 reset code".
            # [1] https://en.wikipedia.org/wiki/ANSI_escape_code#CSI_sequences
            if in_span and (0 in ansi_codes or not ansi_codes):
                output.append('</span>')
                in_span = False
            # Now we're ready to generate the next <span> element (if any) in
            # the knowledge that we're emitting opening <span> and closing
            # </span> tags in the correct order.
            styles = {}
            is_faint = (ANSI_TEXT_STYLES['faint'] in ansi_codes)
            is_inverse = (ANSI_TEXT_STYLES['inverse'] in ansi_codes)
            while ansi_codes:
                number = ansi_codes.pop(0)
                # Try to match a compatible text style.
                if number in compatible_text_styles:
                    styles.update(compatible_text_styles[number])
                    continue
                # Try to extract a text and/or background color.
                text_color = None
                background_color = None
                if 30 <= number <= 37:
                    # 30-37 sets the text color from the eight color palette.
                    text_color = EIGHT_COLOR_PALETTE[number - 30]
                elif 40 <= number <= 47:
                    # 40-47 sets the background color from the eight color palette.
                    background_color = EIGHT_COLOR_PALETTE[number - 40]
                elif 90 <= number <= 97:
                    # 90-97 sets the text color from the high-intensity eight color palette.
                    text_color = BRIGHT_COLOR_PALETTE[number - 90]
                elif 100 <= number <= 107:
                    # 100-107 sets the background color from the high-intensity eight color palette.
                    background_color = BRIGHT_COLOR_PALETTE[number - 100]
                elif number in (38, 48) and len(ansi_codes) >= 2 and ansi_codes[0] == 5:
                    # 38;5;N is a text color in the 256 color mode palette,
                    # 48;5;N is a background color in the 256 color mode palette.
                    try:
                        # Consume the 5 following 38 or 48.
                        ansi_codes.pop(0)
                        # Consume the 256 color mode color index.
                        color_index = ansi_codes.pop(0)
                        # Set the variable to the corresponding HTML/CSS color.
                        if number == 38:
                            text_color = EXTENDED_COLOR_PALETTE[color_index]
                        elif number == 48:
                            background_color = EXTENDED_COLOR_PALETTE[color_index]
                    except (ValueError, IndexError):
                        pass
                # Apply the 'faint' or 'inverse' text style
                # by manipulating the selected color(s).
                if text_color and is_inverse:
                    # Use the text color as the background color and pick a
                    # text color that will be visible on the resulting
                    # background color.
                    background_color = text_color
                    text_color = select_text_color(*parse_hex_color(text_color))
                if text_color and is_faint:
                    # Because I wasn't sure how to implement faint colors
                    # based on normal colors I looked at how gnome-terminal
                    # (my terminal of choice) handles this and it appears
                    # to just pick a somewhat darker color.
                    text_color = '#%02X%02X%02X' % tuple(
                        max(0, n - 40) for n in parse_hex_color(text_color)
                    )
                if text_color:
                    styles['color'] = text_color
                if background_color:
                    styles['background-color'] = background_color
            if styles:
                token = '<span style="%s">' % ';'.join(k + ':' + v for k, v in sorted(styles.items()))
                in_span = True
            else:
                token = ''
        else:
            token = html_encode(token)
        output.append(token)
    html = ''.join(output)
    html = encode_whitespace(html, tabsize)
    if code:
        html = '<code>%s</code>' % html
    return html


def encode_whitespace(text, tabsize=4):
    """
    Encode whitespace so that web browsers properly render it.

    :param text: The plain text (a string).
    :param tabsize: Refer to :func:`str.expandtabs()` for details.
    :returns: The text converted to HTML (a string).

    The purpose of this function is to encode whitespace in such a way that web
    browsers render the same whitespace regardless of whether 'preformatted'
    styling is used (by wrapping the text in a ``<pre>...</pre>`` element).

    .. note:: While the string manipulation performed by this function is
              specifically intended not to corrupt the HTML generated by
              :func:`convert()` it definitely does have the potential to
              corrupt HTML from other sources. You have been warned :-).
    """
    # Convert Windows line endings (CR+LF) to UNIX line endings (LF).
    text = text.replace('\r\n', '\n')
    # Convert UNIX line endings (LF) to HTML line endings (<br>).
    text = text.replace('\n', '<br>\n')
    # Convert tabs to spaces.
    text = text.expandtabs(tabsize)
    # Convert leading spaces (that is to say spaces at the start of the string
    # and/or directly after a line ending) into non-breaking spaces, otherwise
    # HTML rendering engines will simply ignore these spaces.
    text = re.sub(INDENT_PATTERN, encode_whitespace_cb, text)
    # The conversion of leading spaces we just did misses a corner case where a
    # line starts with an HTML tag but the first visible text is a space. Web
    # browsers seem to ignore these spaces, so we need to convert them.
    text = re.sub(TAG_INDENT_PATTERN, r'\1&nbsp;', text)
    # Convert runs of multiple spaces into non-breaking spaces to avoid HTML
    # rendering engines from visually collapsing runs of spaces into a single
    # space. We specifically don't replace single spaces for several reasons:
    # 1. We'd break the HTML emitted by convert() by replacing spaces
    #    inside HTML elements (for example the spaces that separate
    #    element names from attribute names).
    # 2. If every single space is replaced by a non-breaking space,
    #    web browsers perform awkwardly unintuitive word wrapping.
    # 3. The HTML output would be bloated for no good reason.
    text = re.sub(' {2,}', encode_whitespace_cb, text)
    return text


def encode_whitespace_cb(match):
    """
    Replace runs of multiple spaces with non-breaking spaces.

    :param match: A regular expression match object.
    :returns: The replacement string.

    This function is used by :func:`encode_whitespace()` as a callback for
    replacement using a regular expression pattern.
    """
    return '&nbsp;' * len(match.group(0))


def html_encode(text):
    """
    Encode characters with a special meaning as HTML.

    :param text: The plain text (a string).
    :returns: The text converted to HTML (a string).
    """
    text = text.replace('&', '&amp;')
    text = text.replace('<', '&lt;')
    text = text.replace('>', '&gt;')
    text = text.replace('"', '&quot;')
    return text


def parse_hex_color(value):
    """
    Convert a CSS color in hexadecimal notation into its R, G, B components.

    :param value: A CSS color in hexadecimal notation (a string like '#000000').
    :return: A tuple with three integers (with values between 0 and 255)
             corresponding to the R, G and B components of the color.
    :raises: :exc:`~exceptions.ValueError` on values that can't be parsed.
    """
    if value.startswith('#'):
        value = value[1:]
    if len(value) == 3:
        return (
            int(value[0] * 2, 16),
            int(value[1] * 2, 16),
            int(value[2] * 2, 16),
        )
    elif len(value) == 6:
        return (
            int(value[0:2], 16),
            int(value[2:4], 16),
            int(value[4:6], 16),
        )
    else:
        raise ValueError()


def select_text_color(r, g, b):
    """
    Choose a suitable color for the inverse text style.

    :param r: The amount of red (an integer between 0 and 255).
    :param g: The amount of green (an integer between 0 and 255).
    :param b: The amount of blue (an integer between 0 and 255).
    :returns: A CSS color in hexadecimal notation (a string).

    In inverse mode the color that is normally used for the text is instead
    used for the background, however this can render the text unreadable. The
    purpose of :func:`select_text_color()` is to make an effort to select a
    suitable text color. Based on http://stackoverflow.com/a/3943023/112731.
    """
    return '#000' if (r * 0.299 + g * 0.587 + b * 0.114) > 186 else '#FFF'


class ColoredCronMailer(object):

    """
    Easy to use integration between :mod:`coloredlogs` and the UNIX ``cron`` daemon.

    By using :class:`ColoredCronMailer` as a context manager in the command
    line interface of your Python program you make it trivially easy for users
    of your program to opt in to HTML output under ``cron``: The only thing the
    user needs to do is set ``CONTENT_TYPE="text/html"`` in their crontab!

    Under the hood this requires quite a bit of magic and I must admit that I
    developed this code simply because I was curious whether it could even be
    done :-). It requires my :mod:`capturer` package which you can install
    using ``pip install 'coloredlogs[cron]'``. The ``[cron]`` extra will pull
    in :mod:`capturer` 2.4 or newer which is required to capture the output
    while silencing it - otherwise you'd get duplicate output in the emails
    sent by ``cron``.
    """

    def __init__(self):
        """Initialize output capturing when running under ``cron`` with the correct configuration."""
        self.is_enabled = 'text/html' in os.environ.get('CONTENT_TYPE', 'text/plain')
        self.is_silent = False
        if self.is_enabled:
            # We import capturer here so that the coloredlogs[cron] extra
            # isn't required to use the other functions in this module.
            from capturer import CaptureOutput
            self.capturer = CaptureOutput(merged=True, relay=False)

    def __enter__(self):
        """Start capturing output (when applicable)."""
        if self.is_enabled:
            self.capturer.__enter__()
        return self

    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        """Stop capturing output and convert the output to HTML (when applicable)."""
        if self.is_enabled:
            if not self.is_silent:
                # Only call output() when we captured something useful.
                text = self.capturer.get_text()
                if text and not text.isspace():
                    output(convert(text))
            self.capturer.__exit__(exc_type, exc_value, traceback)

    def silence(self):
        """
        Tell :func:`__exit__()` to swallow all output (things will be silent).

        This can be useful when a Python program is written in such a way that
        it has already produced output by the time it becomes apparent that
        nothing useful can be done (say in a cron job that runs every few
        minutes :-p). By calling :func:`silence()` the output can be swallowed
        retroactively, avoiding useless emails from ``cron``.
        """
        self.is_silent = True
@@ -0,0 +1,310 @@
|
||||
# Mapping of ANSI color codes to HTML/CSS colors.
|
||||
#
|
||||
# Author: Peter Odding <peter@peterodding.com>
|
||||
# Last Change: January 14, 2018
|
||||
# URL: https://coloredlogs.readthedocs.io
|
||||
|
||||
"""Mapping of ANSI color codes to HTML/CSS colors."""
|
||||
|
||||
EIGHT_COLOR_PALETTE = (
|
||||
'#010101', # black
|
||||
'#DE382B', # red
|
||||
'#39B54A', # green
|
||||
'#FFC706', # yellow
|
||||
'#006FB8', # blue
|
||||
'#762671', # magenta
|
||||
'#2CB5E9', # cyan
|
||||
'#CCC', # white
|
||||
)
|
||||
"""
|
||||
A tuple of strings mapping basic color codes to CSS colors.
|
||||
|
||||
The items in this tuple correspond to the eight basic color codes for black,
|
||||
red, green, yellow, blue, magenta, cyan and white as defined in the original
|
||||
standard for ANSI escape sequences. The CSS colors are based on the `Ubuntu
|
||||
color scheme`_ described on Wikipedia and they are encoded as hexadecimal
|
||||
values to get the shortest strings, which reduces the size (in bytes) of
|
||||
conversion output.
|
||||
|
||||
.. _Ubuntu color scheme: https://en.wikipedia.org/wiki/ANSI_escape_code#Colors
|
||||
"""
|
||||
|
||||
BRIGHT_COLOR_PALETTE = (
|
||||
'#808080', # black
|
||||
'#F00', # red
|
||||
'#0F0', # green
|
||||
'#FF0', # yellow
|
||||
'#00F', # blue
|
||||
'#F0F', # magenta
|
||||
'#0FF', # cyan
|
||||
'#FFF', # white
|
||||
)
|
||||
"""
|
||||
A tuple of strings mapping bright color codes to CSS colors.
|
||||
|
||||
This tuple maps the bright color variants of :data:`EIGHT_COLOR_PALETTE`.
|
||||
"""
|
||||
|
||||
EXTENDED_COLOR_PALETTE = (
|
||||
'#000000',
|
||||
'#800000',
|
||||
'#008000',
|
||||
'#808000',
|
||||
'#000080',
|
||||
'#800080',
|
||||
'#008080',
|
||||
'#C0C0C0',
|
||||
'#808080',
|
||||
'#FF0000',
|
||||
'#00FF00',
|
||||
'#FFFF00',
|
||||
'#0000FF',
|
||||
'#FF00FF',
|
||||
'#00FFFF',
|
||||
'#FFFFFF',
|
||||
'#000000',
|
||||
'#00005F',
|
||||
'#000087',
|
||||
'#0000AF',
|
||||
'#0000D7',
|
||||
'#0000FF',
|
||||
'#005F00',
|
||||
'#005F5F',
|
||||
'#005F87',
|
||||
'#005FAF',
|
||||
'#005FD7',
|
||||
'#005FFF',
|
||||
'#008700',
|
||||
'#00875F',
|
||||
'#008787',
|
||||
'#0087AF',
|
||||
'#0087D7',
|
||||
'#0087FF',
|
||||
'#00AF00',
|
||||
'#00AF5F',
|
||||
'#00AF87',
|
||||
'#00AFAF',
|
||||
'#00AFD7',
|
||||
'#00AFFF',
|
||||
'#00D700',
|
||||
'#00D75F',
|
||||
'#00D787',
|
||||
'#00D7AF',
|
||||
'#00D7D7',
|
||||
'#00D7FF',
|
||||
'#00FF00',
|
||||
'#00FF5F',
|
||||
'#00FF87',
|
||||
'#00FFAF',
|
||||
'#00FFD7',
|
||||
'#00FFFF',
|
||||
'#5F0000',
|
||||
'#5F005F',
|
||||
'#5F0087',
|
||||
'#5F00AF',
|
||||
'#5F00D7',
|
||||
'#5F00FF',
|
||||
'#5F5F00',
|
||||
'#5F5F5F',
|
||||
'#5F5F87',
|
||||
'#5F5FAF',
|
||||
'#5F5FD7',
|
||||
'#5F5FFF',
|
||||
'#5F8700',
|
||||
'#5F875F',
|
||||
'#5F8787',
|
||||
'#5F87AF',
|
||||
'#5F87D7',
|
||||
'#5F87FF',
|
||||
'#5FAF00',
|
||||
'#5FAF5F',
|
||||
'#5FAF87',
|
||||
'#5FAFAF',
|
||||
'#5FAFD7',
|
||||
'#5FAFFF',
|
||||
'#5FD700',
|
||||
'#5FD75F',
|
||||
'#5FD787',
|
||||
'#5FD7AF',
|
||||
'#5FD7D7',
|
||||
'#5FD7FF',
|
||||
'#5FFF00',
|
||||
'#5FFF5F',
|
||||
'#5FFF87',
|
||||
'#5FFFAF',
|
||||
'#5FFFD7',
|
||||
'#5FFFFF',
|
||||
'#870000',
|
||||
'#87005F',
|
||||
'#870087',
|
||||
'#8700AF',
|
||||
'#8700D7',
|
||||
'#8700FF',
|
||||
'#875F00',
|
||||
'#875F5F',
|
||||
'#875F87',
|
||||
'#875FAF',
|
||||
'#875FD7',
|
||||
'#875FFF',
|
||||
'#878700',
|
||||
'#87875F',
|
||||
'#878787',
|
||||
'#8787AF',
|
||||
'#8787D7',
|
||||
'#8787FF',
|
||||
'#87AF00',
|
    '#87AF5F',
    '#87AF87',
    '#87AFAF',
    '#87AFD7',
    '#87AFFF',
    '#87D700',
    '#87D75F',
    '#87D787',
    '#87D7AF',
    '#87D7D7',
    '#87D7FF',
    '#87FF00',
    '#87FF5F',
    '#87FF87',
    '#87FFAF',
    '#87FFD7',
    '#87FFFF',
    '#AF0000',
    '#AF005F',
    '#AF0087',
    '#AF00AF',
    '#AF00D7',
    '#AF00FF',
    '#AF5F00',
    '#AF5F5F',
    '#AF5F87',
    '#AF5FAF',
    '#AF5FD7',
    '#AF5FFF',
    '#AF8700',
    '#AF875F',
    '#AF8787',
    '#AF87AF',
    '#AF87D7',
    '#AF87FF',
    '#AFAF00',
    '#AFAF5F',
    '#AFAF87',
    '#AFAFAF',
    '#AFAFD7',
    '#AFAFFF',
    '#AFD700',
    '#AFD75F',
    '#AFD787',
    '#AFD7AF',
    '#AFD7D7',
    '#AFD7FF',
    '#AFFF00',
    '#AFFF5F',
    '#AFFF87',
    '#AFFFAF',
    '#AFFFD7',
    '#AFFFFF',
    '#D70000',
    '#D7005F',
    '#D70087',
    '#D700AF',
    '#D700D7',
    '#D700FF',
    '#D75F00',
    '#D75F5F',
    '#D75F87',
    '#D75FAF',
    '#D75FD7',
    '#D75FFF',
    '#D78700',
    '#D7875F',
    '#D78787',
    '#D787AF',
    '#D787D7',
    '#D787FF',
    '#D7AF00',
    '#D7AF5F',
    '#D7AF87',
    '#D7AFAF',
    '#D7AFD7',
    '#D7AFFF',
    '#D7D700',
    '#D7D75F',
    '#D7D787',
    '#D7D7AF',
    '#D7D7D7',
    '#D7D7FF',
    '#D7FF00',
    '#D7FF5F',
    '#D7FF87',
    '#D7FFAF',
    '#D7FFD7',
    '#D7FFFF',
    '#FF0000',
    '#FF005F',
    '#FF0087',
    '#FF00AF',
    '#FF00D7',
    '#FF00FF',
    '#FF5F00',
    '#FF5F5F',
    '#FF5F87',
    '#FF5FAF',
    '#FF5FD7',
    '#FF5FFF',
    '#FF8700',
    '#FF875F',
    '#FF8787',
    '#FF87AF',
    '#FF87D7',
    '#FF87FF',
    '#FFAF00',
    '#FFAF5F',
    '#FFAF87',
    '#FFAFAF',
    '#FFAFD7',
    '#FFAFFF',
    '#FFD700',
    '#FFD75F',
    '#FFD787',
    '#FFD7AF',
    '#FFD7D7',
    '#FFD7FF',
    '#FFFF00',
    '#FFFF5F',
    '#FFFF87',
    '#FFFFAF',
    '#FFFFD7',
    '#FFFFFF',
    '#080808',
    '#121212',
    '#1C1C1C',
    '#262626',
    '#303030',
    '#3A3A3A',
    '#444444',
    '#4E4E4E',
    '#585858',
    '#626262',
    '#6C6C6C',
    '#767676',
    '#808080',
    '#8A8A8A',
    '#949494',
    '#9E9E9E',
    '#A8A8A8',
    '#B2B2B2',
    '#BCBCBC',
    '#C6C6C6',
    '#D0D0D0',
    '#DADADA',
    '#E4E4E4',
    '#EEEEEE',
)
"""
A tuple of strings mapping 256 color mode color codes to CSS colors.

The items in this tuple correspond to the color codes in the 256 color mode palette.
"""
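These hex values follow the xterm 256-color scheme: codes 16-231 form a 6x6x6 RGB cube whose channel levels are 0x00, 0x5F, 0x87, 0xAF, 0xD7 and 0xFF, and codes 232-255 are a grayscale ramp starting at 0x08 and stepping by 10. A stdlib-only sketch (not part of this diff) that regenerates a cube entry:

```python
# Sketch: derive the CSS color for an xterm 256-color cube code (16-231).
# The six channel levels used by xterm's color cube.
LEVELS = (0x00, 0x5F, 0x87, 0xAF, 0xD7, 0xFF)

def cube_color(code):
    """Map a color code in the 16-231 range to its '#RRGGBB' string."""
    if not 16 <= code <= 231:
        raise ValueError("code must be in the 6x6x6 cube range (16-231)")
    index = code - 16
    r, g, b = index // 36, (index // 6) % 6, index % 6
    return '#%02X%02X%02X' % (LEVELS[r], LEVELS[g], LEVELS[b])

print(cube_color(196))  # prints '#FF0000', the brightest red in the cube
```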
@@ -0,0 +1,49 @@
# Demonstration of the coloredlogs package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: January 14, 2018
# URL: https://coloredlogs.readthedocs.io

"""A simple demonstration of the `coloredlogs` package."""

# Standard library modules.
import os
import time

# Modules included in our package.
import coloredlogs

# If my verbose logger is installed, we'll use that for the demo.
try:
    from verboselogs import VerboseLogger as getLogger
except ImportError:
    from logging import getLogger

# Initialize a logger for this module.
logger = getLogger(__name__)

DEMO_DELAY = float(os.environ.get('COLOREDLOGS_DEMO_DELAY', '1'))
"""The number of seconds between each message emitted by :func:`demonstrate_colored_logging()`."""


def demonstrate_colored_logging():
    """Interactively demonstrate the :mod:`coloredlogs` package."""
    # Determine the available logging levels and order them by numeric value.
    decorated_levels = []
    defined_levels = coloredlogs.find_defined_levels()
    normalizer = coloredlogs.NameNormalizer()
    for name, level in defined_levels.items():
        if name != 'NOTSET':
            item = (level, normalizer.normalize_name(name))
            if item not in decorated_levels:
                decorated_levels.append(item)
    ordered_levels = sorted(decorated_levels)
    # Initialize colored output to the terminal, defaulting to the most
    # verbose logging level but enabling the user to customize it.
    coloredlogs.install(level=os.environ.get('COLOREDLOGS_LOG_LEVEL', ordered_levels[0][1]))
    # Print some examples with different timestamps.
    for level, name in ordered_levels:
        log_method = getattr(logger, name, None)
        if log_method:
            log_method("message with level %s (%i)", name, level)
            time.sleep(DEMO_DELAY)
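The loop above decorates each defined level with its numeric value, sorts, and logs one message per level. A stdlib-only sketch of the same ordering idea, restricted to the standard :mod:`logging` levels (no coloredlogs or verboselogs required):

```python
import logging

# Decorate the standard level names with their numeric values and sort,
# mirroring what demonstrate_colored_logging() does with find_defined_levels().
names = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
ordered_levels = sorted((getattr(logging, name), name.lower()) for name in names)

logging.basicConfig(level=logging.DEBUG, format='%(levelname)s %(message)s')
logger = logging.getLogger(__name__)
for level, name in ordered_levels:
    # Each lowercase name doubles as a logger method name (debug, info, ...).
    log_method = getattr(logger, name, None)
    if log_method:
        log_method("message with level %s (%i)", name, level)
```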
@@ -0,0 +1,292 @@
# Easy to use system logging for Python's logging module.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: December 10, 2020
# URL: https://coloredlogs.readthedocs.io

"""
Easy to use UNIX system logging for Python's :mod:`logging` module.

Admittedly system logging has little to do with colored terminal output, however:

- The `coloredlogs` package is my attempt to do Python logging right and system
  logging is an important part of that equation.

- I've seen a surprising number of quirks and mistakes in system logging done
  in Python, for example including ``%(asctime)s`` in a format string (the
  system logging daemon is responsible for adding timestamps and thus you end
  up with duplicate timestamps that make the logs awful to read :-).

- The ``%(programname)s`` filter originated in my system logging code and I
  wanted it in `coloredlogs` so the step to include this module wasn't that big.

- As a bonus this Python module now has a test suite and proper documentation.

So there :-P. Go take a look at :func:`enable_system_logging()`.
"""

# Standard library modules.
import logging
import logging.handlers
import os
import socket
import sys

# External dependencies.
from humanfriendly import coerce_boolean
from humanfriendly.compat import on_macos, on_windows

# Modules included in our package.
from coloredlogs import (
    DEFAULT_LOG_LEVEL,
    ProgramNameFilter,
    adjust_level,
    find_program_name,
    level_to_number,
    replace_handler,
)

LOG_DEVICE_MACOSX = '/var/run/syslog'
"""The pathname of the log device on Mac OS X (a string)."""

LOG_DEVICE_UNIX = '/dev/log'
"""The pathname of the log device on Linux and most other UNIX systems (a string)."""

DEFAULT_LOG_FORMAT = '%(programname)s[%(process)d]: %(levelname)s %(message)s'
"""
The default format for log messages sent to the system log (a string).

The ``%(programname)s`` format requires :class:`~coloredlogs.ProgramNameFilter`
but :func:`enable_system_logging()` takes care of this for you.

The ``name[pid]:`` construct (specifically the colon) in the format allows
rsyslogd_ to extract the ``$programname`` from each log message, which in turn
allows configuration files in ``/etc/rsyslog.d/*.conf`` to filter these log
messages to a separate log file (if the need arises).

.. _rsyslogd: https://en.wikipedia.org/wiki/Rsyslog
"""
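``%(programname)s`` is not a standard :mod:`logging` record attribute; a filter has to inject it into every record before a format string can reference it. A stdlib-only sketch of that mechanism (the class name here is illustrative; the real implementation is ``coloredlogs.ProgramNameFilter``):

```python
import io
import logging

DEFAULT_LOG_FORMAT = '%(programname)s[%(process)d]: %(levelname)s %(message)s'

class ProgramNameInjector(logging.Filter):
    """Hypothetical helper: inject a `programname` attribute into every record."""

    def __init__(self, programname):
        super().__init__()
        self.programname = programname

    def filter(self, record):
        record.programname = self.programname
        return True  # never drop records, just annotate them

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter(DEFAULT_LOG_FORMAT))
handler.addFilter(ProgramNameInjector('demo-program'))
logger = logging.getLogger('programname-demo')
logger.addHandler(handler)
logger.error("something went wrong")

print(buffer.getvalue().strip())  # e.g. demo-program[12345]: ERROR something went wrong
```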
# Initialize a logger for this module.
logger = logging.getLogger(__name__)


class SystemLogging(object):

    """Context manager to enable system logging."""

    def __init__(self, *args, **kw):
        """
        Initialize a :class:`SystemLogging` object.

        :param args: Positional arguments to :func:`enable_system_logging()`.
        :param kw: Keyword arguments to :func:`enable_system_logging()`.
        """
        self.args = args
        self.kw = kw
        self.handler = None

    def __enter__(self):
        """Enable system logging when entering the context."""
        if self.handler is None:
            self.handler = enable_system_logging(*self.args, **self.kw)
        return self.handler

    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        """
        Disable system logging when leaving the context.

        .. note:: If an exception is being handled when we leave the context a
                  warning message including traceback is logged *before* system
                  logging is disabled.
        """
        if self.handler is not None:
            if exc_type is not None:
                logger.warning("Disabling system logging due to unhandled exception!", exc_info=True)
            (self.kw.get('logger') or logging.getLogger()).removeHandler(self.handler)
            self.handler = None

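``SystemLogging`` follows a common pattern: attach a handler on ``__enter__``, detach it on ``__exit__``. A stdlib-only miniature of the same pattern (names here are illustrative, not from this diff), using an in-memory handler instead of syslog:

```python
import io
import logging

class TemporaryHandler:
    """Attach a handler on entry and remove it on exit, like SystemLogging does."""

    def __init__(self, handler, logger=None):
        self.handler = handler
        self.logger = logger or logging.getLogger()

    def __enter__(self):
        self.logger.addHandler(self.handler)
        return self.handler

    def __exit__(self, exc_type, exc_value, traceback):
        # Detach the handler even when an exception is propagating.
        self.logger.removeHandler(self.handler)

captured = io.StringIO()
with TemporaryHandler(logging.StreamHandler(captured)):
    logging.getLogger().error("inside the context")
logging.getLogger().error("outside the context")  # no longer captured
```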
def enable_system_logging(programname=None, fmt=None, logger=None, reconfigure=True, **kw):
    """
    Redirect :mod:`logging` messages to the system log (e.g. ``/var/log/syslog``).

    :param programname: The program name to embed in log messages (a string, defaults
                        to the result of :func:`~coloredlogs.find_program_name()`).
    :param fmt: The log format for system log messages (a string, defaults to
                :data:`DEFAULT_LOG_FORMAT`).
    :param logger: The logger to which the :class:`~logging.handlers.SysLogHandler`
                   should be connected (defaults to the root logger).
    :param level: The logging level for the :class:`~logging.handlers.SysLogHandler`
                  (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced
                  using :func:`~coloredlogs.level_to_number()`.
    :param reconfigure: If :data:`True` (the default) multiple calls to
                        :func:`enable_system_logging()` will each override
                        the previous configuration.
    :param kw: Refer to :func:`connect_to_syslog()`.
    :returns: A :class:`~logging.handlers.SysLogHandler` object or
              :data:`None`. If an existing handler is found and `reconfigure`
              is :data:`False` the existing handler object is returned. If the
              connection to the system logging daemon fails :data:`None` is
              returned.

    As of release 15.0 this function uses :func:`is_syslog_supported()` to
    check whether system logging is supported and appropriate before it's
    enabled.

    .. note:: When the logger's effective level is too restrictive it is
              relaxed (refer to `notes about log levels`_ for details).
    """
    # Check whether system logging is supported / appropriate.
    if not is_syslog_supported():
        return None
    # Provide defaults for omitted arguments.
    programname = programname or find_program_name()
    logger = logger or logging.getLogger()
    fmt = fmt or DEFAULT_LOG_FORMAT
    level = level_to_number(kw.get('level', DEFAULT_LOG_LEVEL))
    # Check whether system logging is already enabled.
    handler, logger = replace_handler(logger, match_syslog_handler, reconfigure)
    # Make sure reconfiguration is allowed or not relevant.
    if not (handler and not reconfigure):
        # Create a system logging handler.
        handler = connect_to_syslog(**kw)
        # Make sure the handler was successfully created.
        if handler:
            # Enable the use of %(programname)s.
            ProgramNameFilter.install(handler=handler, fmt=fmt, programname=programname)
            # Connect the formatter, handler and logger.
            handler.setFormatter(logging.Formatter(fmt))
            logger.addHandler(handler)
            # Adjust the level of the selected logger.
            adjust_level(logger, level)
    return handler

def connect_to_syslog(address=None, facility=None, level=None):
    """
    Create a :class:`~logging.handlers.SysLogHandler`.

    :param address: The device file or network address of the system logging
                    daemon (a string or tuple, defaults to the result of
                    :func:`find_syslog_address()`).
    :param facility: Refer to :class:`~logging.handlers.SysLogHandler`.
                     Defaults to ``LOG_USER``.
    :param level: The logging level for the :class:`~logging.handlers.SysLogHandler`
                  (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced
                  using :func:`~coloredlogs.level_to_number()`.
    :returns: A :class:`~logging.handlers.SysLogHandler` object or :data:`None` (if the
              system logging daemon is unavailable).

    The process of connecting to the system logging daemon goes as follows:

    - The following two socket types are tried (in decreasing preference):

      1. :data:`~socket.SOCK_RAW` avoids truncation of log messages but may
         not be supported.
      2. :data:`~socket.SOCK_STREAM` (TCP) supports longer messages than the
         default (which is UDP).
    """
    if not address:
        address = find_syslog_address()
    if facility is None:
        facility = logging.handlers.SysLogHandler.LOG_USER
    if level is None:
        level = DEFAULT_LOG_LEVEL
    for socktype in socket.SOCK_RAW, socket.SOCK_STREAM, None:
        kw = dict(facility=facility, address=address)
        if socktype is not None:
            kw['socktype'] = socktype
        try:
            handler = logging.handlers.SysLogHandler(**kw)
        except IOError:
            # IOError is a superclass of socket.error which can be raised if the system
            # logging daemon is unavailable.
            pass
        else:
            handler.setLevel(level_to_number(level))
            return handler

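The loop above tries socket types in decreasing preference and falls back to the handler's UDP default when neither works. A stdlib-only sketch of the same try-in-order pattern, pointed at the UDP syslog port on localhost (an assumption; creating a UDP handler succeeds even when no daemon is listening):

```python
import logging.handlers
import socket

def connect_with_fallback(address=('localhost', 514)):
    """Try SysLogHandler socket types in decreasing preference, like connect_to_syslog()."""
    for socktype in (socket.SOCK_RAW, socket.SOCK_STREAM, None):
        kw = dict(address=address)
        if socktype is not None:
            kw['socktype'] = socktype
        try:
            # SOCK_RAW usually needs root and SOCK_STREAM needs a listening
            # daemon; the final iteration uses the handler's UDP default.
            return logging.handlers.SysLogHandler(**kw)
        except OSError:  # covers socket.error / IOError on modern Python
            continue
    return None

handler = connect_with_fallback()
```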
def find_syslog_address():
    """
    Find the most suitable destination for system log messages.

    :returns: The pathname of a log device (a string) or an address/port tuple as
              supported by :class:`~logging.handlers.SysLogHandler`.

    On Mac OS X this prefers :data:`LOG_DEVICE_MACOSX`, after that :data:`LOG_DEVICE_UNIX`
    is checked for existence. If neither of these device files exists the default used
    by :class:`~logging.handlers.SysLogHandler` is returned.
    """
    if sys.platform == 'darwin' and os.path.exists(LOG_DEVICE_MACOSX):
        return LOG_DEVICE_MACOSX
    elif os.path.exists(LOG_DEVICE_UNIX):
        return LOG_DEVICE_UNIX
    else:
        return 'localhost', logging.handlers.SYSLOG_UDP_PORT

def is_syslog_supported():
    """
    Determine whether system logging is supported.

    :returns:

        :data:`True` if system logging is supported and can be enabled,
        :data:`False` if system logging is not supported or there are good
        reasons for not enabling it.

    The decision making process here is as follows:

    Override
        If the environment variable ``$COLOREDLOGS_SYSLOG`` is set it is evaluated
        using :func:`~humanfriendly.coerce_boolean()` and the resulting value
        overrides the platform detection discussed below, this allows users to
        override the decision making process if they disagree / know better.

    Linux / UNIX
        On systems that are not Windows or MacOS (see below) we assume UNIX which
        means either syslog is available or sending a bunch of UDP packets to
        nowhere won't hurt anyone...

    Microsoft Windows
        Over the years I've had multiple reports of :pypi:`coloredlogs` spewing
        extremely verbose errno 10057 warning messages to the console (once for
        each log message I suppose) so I now assume by default that
        "syslog-style system logging" is not generally available on Windows.

    Apple MacOS
        There's CPython issue `#38780`_ which seems to result in a fatal exception
        when the Python interpreter shuts down. This is (way) worse than not
        having system logging enabled. The error message mentioned in `#38780`_
        has actually been following me around for years now, see for example:

        - https://github.com/xolox/python-rotate-backups/issues/9 mentions Docker
          images implying Linux, so not strictly the same as `#38780`_.

        - https://github.com/xolox/python-npm-accel/issues/4 is definitely related
          to `#38780`_ and is what eventually prompted me to add the
          :func:`is_syslog_supported()` logic.

    .. _#38780: https://bugs.python.org/issue38780
    """
    override = os.environ.get("COLOREDLOGS_SYSLOG")
    if override is not None:
        return coerce_boolean(override)
    else:
        return not (on_windows() or on_macos())

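The override path hinges on ``coerce_boolean()`` from :pypi:`humanfriendly`. A stdlib-only stand-in for the same decision flow (the accepted boolean spellings below are an assumption modeled on common conventions, not humanfriendly's exact list):

```python
import os
import sys

TRUTHY = {'1', 'yes', 'true', 'on'}
FALSY = {'0', 'no', 'false', 'off', ''}

def syslog_supported(environ=os.environ, platform=sys.platform):
    """Stdlib stand-in for is_syslog_supported(): env override first, then platform check."""
    override = environ.get('COLOREDLOGS_SYSLOG')
    if override is not None:
        value = override.strip().lower()
        if value in TRUTHY:
            return True
        if value in FALSY:
            return False
        raise ValueError("can't coerce %r to a boolean" % override)
    # Without an override, assume UNIX-style syslog everywhere but Windows/macOS.
    return platform not in ('win32', 'darwin')

print(syslog_supported({'COLOREDLOGS_SYSLOG': 'true'}))  # True
print(syslog_supported({}, platform='win32'))            # False
```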
def match_syslog_handler(handler):
    """
    Identify system logging handlers.

    :param handler: The :class:`~logging.Handler` class to check.
    :returns: :data:`True` if the handler is a
              :class:`~logging.handlers.SysLogHandler`,
              :data:`False` otherwise.

    This function can be used as a callback for :func:`.find_handler()`.
    """
    return isinstance(handler, logging.handlers.SysLogHandler)
@@ -0,0 +1,673 @@
# Automated tests for the `coloredlogs' package.
#
# Author: Peter Odding <peter@peterodding.com>
# Last Change: June 11, 2021
# URL: https://coloredlogs.readthedocs.io

"""Automated tests for the `coloredlogs` package."""

# Standard library modules.
import contextlib
import logging
import logging.handlers
import os
import re
import subprocess
import sys
import tempfile

# External dependencies.
from humanfriendly.compat import StringIO
from humanfriendly.terminal import ANSI_COLOR_CODES, ANSI_CSI, ansi_style, ansi_wrap
from humanfriendly.testing import PatchedAttribute, PatchedItem, TestCase, retry
from humanfriendly.text import format, random_string

# The module we're testing.
import coloredlogs
import coloredlogs.cli
from coloredlogs import (
    CHROOT_FILES,
    ColoredFormatter,
    NameNormalizer,
    decrease_verbosity,
    find_defined_levels,
    find_handler,
    find_hostname,
    find_program_name,
    find_username,
    get_level,
    increase_verbosity,
    install,
    is_verbose,
    level_to_number,
    match_stream_handler,
    parse_encoded_styles,
    set_level,
    walk_propagation_tree,
)
from coloredlogs.demo import demonstrate_colored_logging
from coloredlogs.syslog import SystemLogging, is_syslog_supported, match_syslog_handler
from coloredlogs.converter import (
    ColoredCronMailer,
    EIGHT_COLOR_PALETTE,
    capture,
    convert,
)

# External test dependencies.
from capturer import CaptureOutput
from verboselogs import VerboseLogger

# Compiled regular expression that matches a single line of output produced by
# the default log format (does not include matching of ANSI escape sequences).
PLAIN_TEXT_PATTERN = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)

# Compiled regular expression that matches a single line of output produced by
# the default log format with milliseconds=True.
PATTERN_INCLUDING_MILLISECONDS = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2},\d{3} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)
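The verbose-mode pattern above names each field of coloredlogs' default log format. A quick stdlib check of what it accepts (the sample log line is illustrative):

```python
import re

# Same pattern as PLAIN_TEXT_PATTERN in the test suite.
PLAIN_TEXT_PATTERN = re.compile(r'''
    (?P<date> \d{4}-\d{2}-\d{2} )
    \s (?P<time> \d{2}:\d{2}:\d{2} )
    \s (?P<hostname> \S+ )
    \s (?P<logger_name> \w+ )
    \[ (?P<process_id> \d+ ) \]
    \s (?P<severity> [A-Z]+ )
    \s (?P<message> .* )
''', re.VERBOSE)

sample = '2021-06-11 10:42:03 myhost root[1234] INFO Hello world'
match = PLAIN_TEXT_PATTERN.match(sample)
print(match.group('severity'), match.group('message'))  # INFO Hello world
```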
def setUpModule():
    """Speed up the tests by disabling the demo's artificial delay."""
    os.environ['COLOREDLOGS_DEMO_DELAY'] = '0'
    coloredlogs.demo.DEMO_DELAY = 0


class ColoredLogsTestCase(TestCase):

    """Container for the `coloredlogs` tests."""

    def find_system_log(self):
        """Find the system log file or skip the current test."""
        filename = ('/var/log/system.log' if sys.platform == 'darwin' else (
            '/var/log/syslog' if 'linux' in sys.platform else None
        ))
        if not filename:
            self.skipTest("Location of system log file unknown!")
        elif not os.path.isfile(filename):
            self.skipTest("System log file not found! (%s)" % filename)
        elif not os.access(filename, os.R_OK):
            self.skipTest("Insufficient permissions to read system log file! (%s)" % filename)
        else:
            return filename

    def test_level_to_number(self):
        """Make sure :func:`level_to_number()` works as intended."""
        # Make sure the default levels are translated as expected.
        assert level_to_number('debug') == logging.DEBUG
        assert level_to_number('info') == logging.INFO
        assert level_to_number('warning') == logging.WARNING
        assert level_to_number('error') == logging.ERROR
        assert level_to_number('fatal') == logging.FATAL
        # Make sure bogus level names don't blow up.
        assert level_to_number('bogus-level') == logging.INFO

    def test_find_hostname(self):
        """Make sure :func:`~find_hostname()` works correctly."""
        assert find_hostname()
        # Create a temporary file as a placeholder for e.g. /etc/debian_chroot.
        fd, temporary_file = tempfile.mkstemp()
        try:
            with open(temporary_file, 'w') as handle:
                handle.write('first line\n')
                handle.write('second line\n')
            CHROOT_FILES.insert(0, temporary_file)
            # Make sure the chroot file is being read.
            assert find_hostname() == 'first line'
        finally:
            # Clean up.
            CHROOT_FILES.pop(0)
            os.unlink(temporary_file)
        # Test that unreadable chroot files don't break coloredlogs.
        try:
            CHROOT_FILES.insert(0, temporary_file)
            # Make sure that a usable value is still produced.
            assert find_hostname()
        finally:
            # Clean up.
            CHROOT_FILES.pop(0)

    def test_host_name_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.HostNameFilter()`."""
        install(fmt='%(hostname)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_hostname() in output

    def test_program_name_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.ProgramNameFilter()`."""
        install(fmt='%(programname)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_program_name() in output

    def test_username_filter(self):
        """Make sure :func:`install()` integrates with :class:`~coloredlogs.UserNameFilter()`."""
        install(fmt='%(username)s')
        with CaptureOutput() as capturer:
            logging.info("A truly insignificant message ..")
            output = capturer.get_text()
            assert find_username() in output

    def test_system_logging(self):
        """Make sure the :class:`coloredlogs.syslog.SystemLogging` context manager works."""
        system_log_file = self.find_system_log()
        expected_message = random_string(50)
        with SystemLogging(programname='coloredlogs-test-suite') as syslog:
            if not syslog:
                return self.skipTest("couldn't connect to syslog daemon")
            # When I tried out the system logging support on macOS 10.13.1 on
            # 2018-01-05 I found that while WARNING and ERROR messages show up
            # in the system log DEBUG and INFO messages don't. This explains
            # the importance of the level of the log message below.
            logging.error("%s", expected_message)
        # Retry the following assertion (for up to 60 seconds) to give the
        # logging daemon time to write our log message to disk. This
        # appears to be needed on MacOS workers on Travis CI, see:
        # https://travis-ci.org/xolox/python-coloredlogs/jobs/325245853
        retry(lambda: check_contents(system_log_file, expected_message, True))

    def test_system_logging_override(self):
        """Make sure :func:`coloredlogs.syslog.is_syslog_supported` respects the override."""
        with PatchedItem(os.environ, 'COLOREDLOGS_SYSLOG', 'true'):
            assert is_syslog_supported() is True
        with PatchedItem(os.environ, 'COLOREDLOGS_SYSLOG', 'false'):
            assert is_syslog_supported() is False

    def test_syslog_shortcut_simple(self):
        """Make sure that ``coloredlogs.install(syslog=True)`` works."""
        system_log_file = self.find_system_log()
        expected_message = random_string(50)
        with cleanup_handlers():
            # See test_system_logging() for the importance of this log level.
            coloredlogs.install(syslog=True)
            logging.error("%s", expected_message)
        # See the comments in test_system_logging() on why this is retried.
        retry(lambda: check_contents(system_log_file, expected_message, True))

    def test_syslog_shortcut_enhanced(self):
        """Make sure that ``coloredlogs.install(syslog='warning')`` works."""
        system_log_file = self.find_system_log()
        the_expected_message = random_string(50)
        not_an_expected_message = random_string(50)
        with cleanup_handlers():
            # See test_system_logging() for the importance of these log levels.
            coloredlogs.install(syslog='error')
            logging.warning("%s", not_an_expected_message)
            logging.error("%s", the_expected_message)
        # See the comments in test_system_logging() on why this is retried.
        retry(lambda: check_contents(system_log_file, the_expected_message, True))
        retry(lambda: check_contents(system_log_file, not_an_expected_message, False))

    def test_name_normalization(self):
        """Make sure :class:`~coloredlogs.NameNormalizer` works as intended."""
        nn = NameNormalizer()
        for canonical_name in ['debug', 'info', 'warning', 'error', 'critical']:
            assert nn.normalize_name(canonical_name) == canonical_name
            assert nn.normalize_name(canonical_name.upper()) == canonical_name
        assert nn.normalize_name('warn') == 'warning'
        assert nn.normalize_name('fatal') == 'critical'

    def test_style_parsing(self):
        """Make sure :func:`~coloredlogs.parse_encoded_styles()` works as intended."""
        encoded_styles = 'debug=green;warning=yellow;error=red;critical=red,bold'
        decoded_styles = parse_encoded_styles(encoded_styles, normalize_key=lambda k: k.upper())
        assert sorted(decoded_styles.keys()) == sorted(['debug', 'warning', 'error', 'critical'])
        assert decoded_styles['debug']['color'] == 'green'
        assert decoded_styles['warning']['color'] == 'yellow'
        assert decoded_styles['error']['color'] == 'red'
        assert decoded_styles['critical']['color'] == 'red'
        assert decoded_styles['critical']['bold'] is True

    def test_is_verbose(self):
        """Make sure is_verbose() does what it should :-)."""
        set_level(logging.INFO)
        assert not is_verbose()
        set_level(logging.DEBUG)
        assert is_verbose()
        set_level(logging.VERBOSE)
        assert is_verbose()

    def test_increase_verbosity(self):
        """Make sure increase_verbosity() respects default and custom levels."""
        # Start from a known state.
        set_level(logging.INFO)
        assert get_level() == logging.INFO
        # INFO -> VERBOSE.
        increase_verbosity()
        assert get_level() == logging.VERBOSE
        # VERBOSE -> DEBUG.
        increase_verbosity()
        assert get_level() == logging.DEBUG
        # DEBUG -> SPAM.
        increase_verbosity()
        assert get_level() == logging.SPAM
        # SPAM -> NOTSET.
        increase_verbosity()
        assert get_level() == logging.NOTSET
        # NOTSET -> NOTSET.
        increase_verbosity()
        assert get_level() == logging.NOTSET

    def test_decrease_verbosity(self):
        """Make sure decrease_verbosity() respects default and custom levels."""
        # Start from a known state.
        set_level(logging.INFO)
        assert get_level() == logging.INFO
        # INFO -> NOTICE.
        decrease_verbosity()
        assert get_level() == logging.NOTICE
        # NOTICE -> WARNING.
        decrease_verbosity()
        assert get_level() == logging.WARNING
        # WARNING -> SUCCESS.
        decrease_verbosity()
        assert get_level() == logging.SUCCESS
        # SUCCESS -> ERROR.
        decrease_verbosity()
        assert get_level() == logging.ERROR
        # ERROR -> CRITICAL.
        decrease_verbosity()
        assert get_level() == logging.CRITICAL
        # CRITICAL -> CRITICAL.
        decrease_verbosity()
        assert get_level() == logging.CRITICAL

    def test_level_discovery(self):
        """Make sure find_defined_levels() always reports the levels defined in Python's standard library."""
        defined_levels = find_defined_levels()
        level_values = defined_levels.values()
        for number in (0, 10, 20, 30, 40, 50):
            assert number in level_values

    def test_walk_propagation_tree(self):
        """Make sure walk_propagation_tree() properly walks the tree of loggers."""
        root, parent, child, grand_child = self.get_logger_tree()
        # Check the default mode of operation.
        loggers = list(walk_propagation_tree(grand_child))
        assert loggers == [grand_child, child, parent, root]
        # Now change the propagation (non-default mode of operation).
        child.propagate = False
        loggers = list(walk_propagation_tree(grand_child))
        assert loggers == [grand_child, child]

    def test_find_handler(self):
        """Make sure find_handler() works as intended."""
        root, parent, child, grand_child = self.get_logger_tree()
        # Add some handlers to the tree.
        stream_handler = logging.StreamHandler()
        syslog_handler = logging.handlers.SysLogHandler()
        child.addHandler(stream_handler)
        parent.addHandler(syslog_handler)
        # Make sure the first matching handler is returned.
        matched_handler, matched_logger = find_handler(grand_child, lambda h: isinstance(h, logging.Handler))
        assert matched_handler is stream_handler
        # Make sure the first matching handler of the given type is returned.
        matched_handler, matched_logger = find_handler(child, lambda h: isinstance(h, logging.handlers.SysLogHandler))
        assert matched_handler is syslog_handler

    def get_logger_tree(self):
        """Create and return a tree of loggers."""
        # Get the root logger.
        root = logging.getLogger()
        # Create a top level logger for ourselves.
        parent_name = random_string()
        parent = logging.getLogger(parent_name)
        # Create a child logger.
        child_name = '%s.%s' % (parent_name, random_string())
        child = logging.getLogger(child_name)
        # Create a grand child logger.
        grand_child_name = '%s.%s' % (child_name, random_string())
        grand_child = logging.getLogger(grand_child_name)
        return root, parent, child, grand_child

    def test_support_for_milliseconds(self):
        """Make sure milliseconds are hidden by default but can be easily enabled."""
        # Check that the default log format doesn't include milliseconds.
        stream = StringIO()
        install(reconfigure=True, stream=stream)
        logging.info("This should not include milliseconds.")
        assert all(map(PLAIN_TEXT_PATTERN.match, stream.getvalue().splitlines()))
        # Check that milliseconds can be enabled via a shortcut.
        stream = StringIO()
        install(milliseconds=True, reconfigure=True, stream=stream)
        logging.info("This should include milliseconds.")
        assert all(map(PATTERN_INCLUDING_MILLISECONDS.match, stream.getvalue().splitlines()))

    def test_support_for_milliseconds_directive(self):
        """Make sure milliseconds using the ``%f`` directive are supported."""
        stream = StringIO()
        install(reconfigure=True, stream=stream, datefmt='%Y-%m-%dT%H:%M:%S.%f%z')
        logging.info("This should be timestamped according to #45.")
        assert re.match(r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4}\s', stream.getvalue())

    def test_plain_text_output_format(self):
        """Inspect the plain text output of coloredlogs."""
        logger = VerboseLogger(random_string(25))
        stream = StringIO()
        install(level=logging.NOTSET, logger=logger, stream=stream)
        # Test that filtering on severity works.
        logger.setLevel(logging.INFO)
        logger.debug("No one should see this message.")
        assert len(stream.getvalue().strip()) == 0
        # Test that the default output format looks okay in plain text.
        logger.setLevel(logging.NOTSET)
        for method, severity in ((logger.debug, 'DEBUG'),
                                 (logger.info, 'INFO'),
                                 (logger.verbose, 'VERBOSE'),
                                 (logger.warning, 'WARNING'),
                                 (logger.error, 'ERROR'),
                                 (logger.critical, 'CRITICAL')):
            # XXX Workaround for a regression in Python 3.7 caused by the
            # Logger.isEnabledFor() method using stale cache entries. If we
            # don't clear the cache then logger.isEnabledFor(logging.DEBUG)
            # returns False and no DEBUG message is emitted.
            try:
                logger._cache.clear()
            except AttributeError:
                pass
            # Prepare the text.
            text = "This is a message with severity %r." % severity.lower()
            # Log the message with the given severity.
            method(text)
            # Get the line of output generated by the handler.
            output = stream.getvalue()
            lines = output.splitlines()
            last_line = lines[-1]
            assert text in last_line
            assert severity in last_line
            assert PLAIN_TEXT_PATTERN.match(last_line)

    def test_dynamic_stderr_lookup(self):
        """Make sure coloredlogs.install() uses StandardErrorHandler when possible."""
        coloredlogs.install()
        # Redirect sys.stderr to a temporary buffer.
        initial_stream = StringIO()
        initial_text = "Which stream will receive this text?"
        with PatchedAttribute(sys, 'stderr', initial_stream):
            logging.info(initial_text)
            assert initial_text in initial_stream.getvalue()
        # Redirect sys.stderr again, to a different destination.
        subsequent_stream = StringIO()
        subsequent_text = "And which stream will receive this other text?"
        with PatchedAttribute(sys, 'stderr', subsequent_stream):
            logging.info(subsequent_text)
            assert subsequent_text in subsequent_stream.getvalue()

    def test_force_enable(self):
        """Make sure ANSI escape sequences can be forced (bypassing auto-detection)."""
        interpreter = subprocess.Popen([
            sys.executable, "-c", ";".join([
                "import coloredlogs, logging",
                "coloredlogs.install(isatty=True)",
                "logging.info('Hello world')",
            ]),
        ], stderr=subprocess.PIPE)
        stdout, stderr = interpreter.communicate()
        assert ANSI_CSI in stderr.decode('UTF-8')

    def test_auto_disable(self):
        """
        Make sure ANSI escape sequences are not emitted when logging output is being redirected.
|
||||
|
||||
This is a regression test for https://github.com/xolox/python-coloredlogs/issues/100.
|
||||
|
||||
It works as follows:
|
||||
|
||||
1. We mock an interactive terminal using 'capturer' to ensure that this
|
||||
test works inside test drivers that capture output (like pytest).
|
||||
|
||||
2. We launch a subprocess (to ensure a clean process state) where
|
||||
stderr is captured but stdout is not, emulating issue #100.
|
||||
|
||||
3. The output captured on stderr contained ANSI escape sequences after
|
||||
this test was written and before the issue was fixed, so now this
|
||||
serves as a regression test for issue #100.
|
||||
"""
|
||||
with CaptureOutput():
|
||||
interpreter = subprocess.Popen([
|
||||
sys.executable, "-c", ";".join([
|
||||
"import coloredlogs, logging",
|
||||
"coloredlogs.install()",
|
||||
"logging.info('Hello world')",
|
||||
]),
|
||||
], stderr=subprocess.PIPE)
|
||||
stdout, stderr = interpreter.communicate()
|
||||
assert ANSI_CSI not in stderr.decode('UTF-8')
|
||||
|
||||
def test_env_disable(self):
|
||||
"""Make sure ANSI escape sequences can be disabled using ``$NO_COLOR``."""
|
||||
with PatchedItem(os.environ, 'NO_COLOR', 'I like monochrome'):
|
||||
with CaptureOutput() as capturer:
|
||||
subprocess.check_call([
|
||||
sys.executable, "-c", ";".join([
|
||||
"import coloredlogs, logging",
|
||||
"coloredlogs.install()",
|
||||
"logging.info('Hello world')",
|
||||
]),
|
||||
])
|
||||
output = capturer.get_text()
|
||||
assert ANSI_CSI not in output
|
||||
|
||||
def test_html_conversion(self):
|
||||
"""Check the conversion from ANSI escape sequences to HTML."""
|
||||
# Check conversion of colored text.
|
||||
for color_name, ansi_code in ANSI_COLOR_CODES.items():
|
||||
ansi_encoded_text = 'plain text followed by %s text' % ansi_wrap(color_name, color=color_name)
|
||||
expected_html = format(
|
||||
'<code>plain text followed by <span style="color:{css}">{name}</span> text</code>',
|
||||
css=EIGHT_COLOR_PALETTE[ansi_code], name=color_name,
|
||||
)
|
||||
self.assertEqual(expected_html, convert(ansi_encoded_text))
|
||||
# Check conversion of bright colored text.
|
||||
expected_html = '<code><span style="color:#FF0">bright yellow</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('bright yellow', color='yellow', bright=True)))
|
||||
# Check conversion of text with a background color.
|
||||
expected_html = '<code><span style="background-color:#DE382B">red background</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('red background', background='red')))
|
||||
# Check conversion of text with a bright background color.
|
||||
expected_html = '<code><span style="background-color:#F00">bright red background</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('bright red background', background='red', bright=True)))
|
||||
# Check conversion of text that uses the 256 color mode palette as a foreground color.
|
||||
expected_html = '<code><span style="color:#FFAF00">256 color mode foreground</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('256 color mode foreground', color=214)))
|
||||
# Check conversion of text that uses the 256 color mode palette as a background color.
|
||||
expected_html = '<code><span style="background-color:#AF0000">256 color mode background</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('256 color mode background', background=124)))
|
||||
# Check that invalid 256 color mode indexes don't raise exceptions.
|
||||
expected_html = '<code>plain text expected</code>'
|
||||
self.assertEqual(expected_html, convert('\x1b[38;5;256mplain text expected\x1b[0m'))
|
||||
# Check conversion of bold text.
|
||||
expected_html = '<code><span style="font-weight:bold">bold text</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('bold text', bold=True)))
|
||||
# Check conversion of underlined text.
|
||||
expected_html = '<code><span style="text-decoration:underline">underlined text</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('underlined text', underline=True)))
|
||||
# Check conversion of strike-through text.
|
||||
expected_html = '<code><span style="text-decoration:line-through">strike-through text</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('strike-through text', strike_through=True)))
|
||||
# Check conversion of inverse text.
|
||||
expected_html = '<code><span style="background-color:#FFC706;color:#000">inverse</span></code>'
|
||||
self.assertEqual(expected_html, convert(ansi_wrap('inverse', color='yellow', inverse=True)))
|
||||
# Check conversion of URLs.
|
||||
for sample_text in 'www.python.org', 'http://coloredlogs.rtfd.org', 'https://coloredlogs.rtfd.org':
|
||||
sample_url = sample_text if '://' in sample_text else ('http://' + sample_text)
|
||||
expected_html = '<code><a href="%s" style="color:inherit">%s</a></code>' % (sample_url, sample_text)
|
||||
self.assertEqual(expected_html, convert(sample_text))
|
||||
# Check that the capture pattern for URLs doesn't match ANSI escape
|
||||
# sequences and also check that the short hand for the 0 reset code is
|
||||
# supported. These are tests for regressions of bugs found in
|
||||
# coloredlogs <= 8.0.
|
||||
reset_short_hand = '\x1b[0m'
|
||||
blue_underlined = ansi_style(color='blue', underline=True)
|
||||
ansi_encoded_text = '<%shttps://coloredlogs.readthedocs.io%s>' % (blue_underlined, reset_short_hand)
|
||||
expected_html = (
|
||||
'<code><<span style="color:#006FB8;text-decoration:underline">'
|
||||
'<a href="https://coloredlogs.readthedocs.io" style="color:inherit">'
|
||||
'https://coloredlogs.readthedocs.io'
|
||||
'</a></span>></code>'
|
||||
)
|
||||
self.assertEqual(expected_html, convert(ansi_encoded_text))
|
||||
|
||||
def test_output_interception(self):
|
||||
"""Test capturing of output from external commands."""
|
||||
expected_output = 'testing, 1, 2, 3 ..'
|
||||
actual_output = capture(['echo', expected_output])
|
||||
assert actual_output.strip() == expected_output.strip()
|
||||
|
||||
def test_enable_colored_cron_mailer(self):
|
||||
"""Test that automatic ANSI to HTML conversion when running under ``cron`` can be enabled."""
|
||||
with PatchedItem(os.environ, 'CONTENT_TYPE', 'text/html'):
|
||||
with ColoredCronMailer() as mailer:
|
||||
assert mailer.is_enabled
|
||||
|
||||
def test_disable_colored_cron_mailer(self):
|
||||
"""Test that automatic ANSI to HTML conversion when running under ``cron`` can be disabled."""
|
||||
with PatchedItem(os.environ, 'CONTENT_TYPE', 'text/plain'):
|
||||
with ColoredCronMailer() as mailer:
|
||||
assert not mailer.is_enabled
|
||||
|
||||
def test_auto_install(self):
|
||||
"""Test :func:`coloredlogs.auto_install()`."""
|
||||
needle = random_string()
|
||||
command_line = [sys.executable, '-c', 'import logging; logging.info(%r)' % needle]
|
||||
# Sanity check that log messages aren't enabled by default.
|
||||
with CaptureOutput() as capturer:
|
||||
os.environ['COLOREDLOGS_AUTO_INSTALL'] = 'false'
|
||||
subprocess.check_call(command_line)
|
||||
output = capturer.get_text()
|
||||
assert needle not in output
|
||||
# Test that the $COLOREDLOGS_AUTO_INSTALL environment variable can be
|
||||
# used to automatically call coloredlogs.install() during initialization.
|
||||
with CaptureOutput() as capturer:
|
||||
os.environ['COLOREDLOGS_AUTO_INSTALL'] = 'true'
|
||||
subprocess.check_call(command_line)
|
||||
output = capturer.get_text()
|
||||
assert needle in output
|
||||
|
||||
def test_cli_demo(self):
|
||||
"""Test the command line colored logging demonstration."""
|
||||
with CaptureOutput() as capturer:
|
||||
main('coloredlogs', '--demo')
|
||||
output = capturer.get_text()
|
||||
# Make sure the output contains all of the expected logging level names.
|
||||
for name in 'debug', 'info', 'warning', 'error', 'critical':
|
||||
assert name.upper() in output
|
||||
|
||||
def test_cli_conversion(self):
|
||||
"""Test the command line HTML conversion."""
|
||||
output = main('coloredlogs', '--convert', 'coloredlogs', '--demo', capture=True)
|
||||
# Make sure the output is encoded as HTML.
|
||||
assert '<span' in output
|
||||
|
||||
def test_empty_conversion(self):
|
||||
"""
|
||||
Test that conversion of empty output produces no HTML.
|
||||
|
||||
This test was added because I found that ``coloredlogs --convert`` when
|
||||
used in a cron job could cause cron to send out what appeared to be
|
||||
empty emails. On more careful inspection the body of those emails was
|
||||
``<code></code>``. By not emitting the wrapper element when no other
|
||||
HTML is generated, cron will not send out an email.
|
||||
"""
|
||||
output = main('coloredlogs', '--convert', 'true', capture=True)
|
||||
assert not output.strip()
|
||||
|
||||
def test_implicit_usage_message(self):
|
||||
"""Test that the usage message is shown when no actions are given."""
|
||||
assert 'Usage:' in main('coloredlogs', capture=True)
|
||||
|
||||
def test_explicit_usage_message(self):
|
||||
"""Test that the usage message is shown when ``--help`` is given."""
|
||||
assert 'Usage:' in main('coloredlogs', '--help', capture=True)
|
||||
|
||||
def test_custom_record_factory(self):
|
||||
"""
|
||||
Test that custom LogRecord factories are supported.
|
||||
|
||||
This test is a bit convoluted because the logging module suppresses
|
||||
exceptions. We monkey patch the method suspected of encountering
|
||||
exceptions so that we can tell after it was called whether any
|
||||
exceptions occurred (despite the exceptions not propagating).
|
||||
"""
|
||||
if not hasattr(logging, 'getLogRecordFactory'):
|
||||
return self.skipTest("this test requires Python >= 3.2")
|
||||
|
||||
exceptions = []
|
||||
original_method = ColoredFormatter.format
|
||||
original_factory = logging.getLogRecordFactory()
|
||||
|
||||
def custom_factory(*args, **kwargs):
|
||||
record = original_factory(*args, **kwargs)
|
||||
record.custom_attribute = 0xdecafbad
|
||||
return record
|
||||
|
||||
def custom_method(*args, **kw):
|
||||
try:
|
||||
return original_method(*args, **kw)
|
||||
except Exception as e:
|
||||
exceptions.append(e)
|
||||
raise
|
||||
|
||||
with PatchedAttribute(ColoredFormatter, 'format', custom_method):
|
||||
logging.setLogRecordFactory(custom_factory)
|
||||
try:
|
||||
demonstrate_colored_logging()
|
||||
finally:
|
||||
logging.setLogRecordFactory(original_factory)
|
||||
|
||||
# Ensure that no exceptions were triggered.
|
||||
assert not exceptions
|
||||
|
||||
|
||||
def check_contents(filename, contents, match):
|
||||
"""Check if a line in a file contains an expected string."""
|
||||
with open(filename) as handle:
|
||||
assert any(contents in line for line in handle) == match
|
||||
|
||||
|
||||
def main(*arguments, **options):
|
||||
"""Wrap the command line interface to make it easier to test."""
|
||||
capture = options.get('capture', False)
|
||||
saved_argv = sys.argv
|
||||
saved_stdout = sys.stdout
|
||||
try:
|
||||
sys.argv = arguments
|
||||
if capture:
|
||||
sys.stdout = StringIO()
|
||||
coloredlogs.cli.main()
|
||||
if capture:
|
||||
return sys.stdout.getvalue()
|
||||
finally:
|
||||
sys.argv = saved_argv
|
||||
sys.stdout = saved_stdout
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def cleanup_handlers():
|
||||
"""Context manager to cleanup output handlers."""
|
||||
# There's nothing to set up so we immediately yield control.
|
||||
yield
|
||||
# After the with block ends we cleanup any output handlers.
|
||||
for match_func in match_stream_handler, match_syslog_handler:
|
||||
handler, logger = find_handler(logging.getLogger(), match_func)
|
||||
if handler and logger:
|
||||
logger.removeHandler(handler)
|
||||
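The millisecond checks in the tests above rely on module-level regexes (`PLAIN_TEXT_PATTERN`, `PATTERN_INCLUDING_MILLISECONDS`) defined earlier in the suite. The same idea can be sketched with only the standard library; the pattern below is an illustrative stand-in, not the one from this test suite, and relies on the fact that `logging`'s default `%(asctime)s` already appends `,mmm` milliseconds:

```python
import logging
import re
from io import StringIO

# Illustrative stand-in for a millisecond-aware pattern:
# "YYYY-MM-DD HH:MM:SS,mmm LEVEL message"
MILLISECOND_PATTERN = re.compile(
    r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} \w+ .+$'
)

stream = StringIO()
handler = logging.StreamHandler(stream)
# The default asctime format is e.g. "2003-07-08 16:49:45,896".
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger = logging.getLogger('sketch')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("This should include milliseconds.")
assert all(MILLISECOND_PATTERN.match(line)
           for line in stream.getvalue().splitlines())
```

This mirrors the structure of the tests above: log into a `StringIO`, then match every emitted line against the expected format.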
@@ -0,0 +1 @@
pip
@@ -0,0 +1,27 @@
Metadata-Version: 2.4
Name: flatbuffers
Version: 25.12.19
Summary: The FlatBuffers serialization format for Python
Home-page: https://google.github.io/flatbuffers/
Author: Derek Bailey
Author-email: derekbailey@google.com
License: Apache 2.0
Project-URL: Documentation, https://google.github.io/flatbuffers/
Project-URL: Source, https://github.com/google/flatbuffers
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: home-page
Dynamic: license
Dynamic: project-url
Dynamic: summary

Python runtime library for use with the `Flatbuffers <https://google.github.io/flatbuffers/>`_ serialization format.
@@ -0,0 +1,25 @@
flatbuffers-25.12.19.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
flatbuffers-25.12.19.dist-info/METADATA,sha256=nzgjn5b6ohUipqratpEuHlYHCLvpyfaWoGeclAP6CJ4,1004
flatbuffers-25.12.19.dist-info/RECORD,,
flatbuffers-25.12.19.dist-info/WHEEL,sha256=JNWh1Fm1UdwIQV075glCn4MVuCRs0sotJIq-J6rbxCU,109
flatbuffers-25.12.19.dist-info/top_level.txt,sha256=UXVWLA8ys6HeqTz6rfKesocUq6ln-ZL8mhZC_cq5BEc,12
flatbuffers/__init__.py,sha256=vJZrqZOOTKdBNMa_iTKUA6WJG_c_NzKGpFXOe1Igtiw,751
flatbuffers/__pycache__/__init__.cpython-312.pyc,,
flatbuffers/__pycache__/_version.cpython-312.pyc,,
flatbuffers/__pycache__/builder.cpython-312.pyc,,
flatbuffers/__pycache__/compat.cpython-312.pyc,,
flatbuffers/__pycache__/encode.cpython-312.pyc,,
flatbuffers/__pycache__/flexbuffers.cpython-312.pyc,,
flatbuffers/__pycache__/number_types.cpython-312.pyc,,
flatbuffers/__pycache__/packer.cpython-312.pyc,,
flatbuffers/__pycache__/table.cpython-312.pyc,,
flatbuffers/__pycache__/util.cpython-312.pyc,,
flatbuffers/_version.py,sha256=sk31rbYWDseoJxI9YQ1fYR-MMxRATyqsntKAEv7D7pI,696
flatbuffers/builder.py,sha256=HrG5KJ9rasiSTrMGeatkSdDs7fXV5fy_927Dsgakp4A,24567
flatbuffers/compat.py,sha256=ihBSpWDCSL-vgLSyZtcu8LX3ZI3wz9LhtqItY2GQZgg,2373
flatbuffers/encode.py,sha256=2Or3mgWRAkJiWg-GgYasDU4zIHpQU3W06fmIhwbz5uM,1550
flatbuffers/flexbuffers.py,sha256=yF8Wr4Lo8WJb-pj9NNaIYxLwzlHHyTroM0iO8fyDwbU,44454
flatbuffers/number_types.py,sha256=ijO0QcJiuxlQegoBOed0v9m0DdzTZHWxpTBZUqzsWHA,3762
flatbuffers/packer.py,sha256=LNWym8YgFRqHjcPeGpYY3inCGWH6XnbkQKtAPtFEVas,1164
flatbuffers/table.py,sha256=ciYTmq_CzAuYpb3KAVnl75M84ieChfbyKne-dFHzwwU,4818
flatbuffers/util.py,sha256=mRVQ1VoHp0MJMNtRTUGVzALwN4T_C-U14tMbj99py2A,1608
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: setuptools (80.9.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

@@ -0,0 +1 @@
flatbuffers
@@ -0,0 +1,19 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import util
from ._version import __version__
from .builder import Builder
from .compat import range_func as compat_range
from .table import Table
@@ -0,0 +1,17 @@
# Copyright 2019 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Placeholder, to be updated during the release process
# by the setup.py
__version__ = "25.12.19"
@@ -0,0 +1,858 @@
|
||||
# Copyright 2014 Google Inc. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import warnings
|
||||
|
||||
from . import compat
|
||||
from . import encode
|
||||
from . import number_types as N
|
||||
from . import packer
|
||||
from .compat import memoryview_type
|
||||
from .compat import NumpyRequiredForThisFeature, import_numpy
|
||||
from .compat import range_func
|
||||
from .number_types import (SOffsetTFlags, UOffsetTFlags, VOffsetTFlags)
|
||||
|
||||
np = import_numpy()
|
||||
## @file
|
||||
## @addtogroup flatbuffers_python_api
|
||||
## @{
|
||||
|
||||
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
class OffsetArithmeticError(RuntimeError):
|
||||
"""Error caused by an Offset arithmetic error.
|
||||
|
||||
Probably caused by bad writing of fields. This is considered an unreachable
|
||||
situation in normal circumstances.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class IsNotNestedError(RuntimeError):
|
||||
"""Error caused by using a Builder to write Object data when not inside
|
||||
|
||||
an Object.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class IsNestedError(RuntimeError):
|
||||
"""Error caused by using a Builder to begin an Object when an Object is
|
||||
|
||||
already being built.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class StructIsNotInlineError(RuntimeError):
|
||||
"""Error caused by using a Builder to write a Struct at a location that
|
||||
|
||||
is not the current Offset.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class BuilderSizeError(RuntimeError):
|
||||
"""Error caused by causing a Builder to exceed the hardcoded limit of 2
|
||||
|
||||
gigabytes.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class BuilderNotFinishedError(RuntimeError):
|
||||
"""Error caused by not calling `Finish` before calling `Output`."""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class EndVectorLengthMismatched(RuntimeError):
|
||||
"""The number of elements passed to EndVector does not match the number
|
||||
|
||||
specified in StartVector.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
# VtableMetadataFields is the count of metadata fields in each vtable.
|
||||
VtableMetadataFields = 2
|
||||
## @endcond
|
||||
|
||||
|
||||
class Builder(object):
|
||||
"""A Builder is used to construct one or more FlatBuffers.
|
||||
|
||||
Typically, Builder objects will be used from code generated by the `flatc`
|
||||
compiler.
|
||||
|
||||
A Builder constructs byte buffers in a last-first manner for simplicity and
|
||||
performance during reading.
|
||||
|
||||
Internally, a Builder is a state machine for creating FlatBuffer objects.
|
||||
|
||||
It holds the following internal state:
|
||||
- Bytes: an array of bytes.
|
||||
- current_vtable: a list of integers.
|
||||
- vtables: a hash of vtable entries.
|
||||
|
||||
Attributes:
|
||||
Bytes: The internal `bytearray` for the Builder.
|
||||
finished: A boolean determining if the Builder has been finalized.
|
||||
"""
|
||||
|
||||
## @cond FLATBUFFERS_INTENRAL
|
||||
__slots__ = (
|
||||
"Bytes",
|
||||
"current_vtable",
|
||||
"head",
|
||||
"minalign",
|
||||
"objectEnd",
|
||||
"vtables",
|
||||
"nested",
|
||||
"forceDefaults",
|
||||
"finished",
|
||||
"vectorNumElems",
|
||||
"sharedStrings",
|
||||
)
|
||||
|
||||
"""Maximum buffer size constant, in bytes.
|
||||
|
||||
Builder will never allow it's buffer grow over this size.
|
||||
Currently equals 2Gb.
|
||||
"""
|
||||
MAX_BUFFER_SIZE = 2**31
|
||||
## @endcond
|
||||
|
||||
def __init__(self, initialSize=1024):
|
||||
"""Initializes a Builder of size `initial_size`.
|
||||
|
||||
The internal buffer is grown as needed.
|
||||
"""
|
||||
|
||||
if not (0 <= initialSize <= Builder.MAX_BUFFER_SIZE):
|
||||
msg = "flatbuffers: Cannot create Builder larger than 2 gigabytes."
|
||||
raise BuilderSizeError(msg)
|
||||
|
||||
self.Bytes = bytearray(initialSize)
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
self.current_vtable = None
|
||||
self.head = UOffsetTFlags.py_type(initialSize)
|
||||
self.minalign = 1
|
||||
self.objectEnd = None
|
||||
self.vtables = {}
|
||||
self.nested = False
|
||||
self.forceDefaults = False
|
||||
self.sharedStrings = None
|
||||
## @endcond
|
||||
self.finished = False
|
||||
|
||||
def Clear(self):
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
self.current_vtable = None
|
||||
self.head = len(self.Bytes)
|
||||
self.minalign = 1
|
||||
self.objectEnd = None
|
||||
self.vtables = {}
|
||||
self.nested = False
|
||||
self.forceDefaults = False
|
||||
self.sharedStrings = None
|
||||
self.vectorNumElems = None
|
||||
## @endcond
|
||||
self.finished = False
|
||||
|
||||
def Output(self):
|
||||
"""Return the portion of the buffer that has been used for writing data.
|
||||
|
||||
This is the typical way to access the FlatBuffer data inside the
|
||||
builder. If you try to access `Builder.Bytes` directly, you would need
|
||||
to manually index it with `Head()`, since the buffer is constructed
|
||||
backwards.
|
||||
|
||||
It raises BuilderNotFinishedError if the buffer has not been finished
|
||||
with `Finish`.
|
||||
"""
|
||||
|
||||
if not self.finished:
|
||||
raise BuilderNotFinishedError()
|
||||
|
||||
return self.Bytes[self.head :]
|
||||
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
def StartObject(self, numfields):
|
||||
"""StartObject initializes bookkeeping for writing a new object."""
|
||||
|
||||
self.assertNotNested()
|
||||
|
||||
# use 32-bit offsets so that arithmetic doesn't overflow.
|
||||
self.current_vtable = [0] * numfields
|
||||
self.objectEnd = self.Offset()
|
||||
self.nested = True
|
||||
|
||||
def WriteVtable(self):
|
||||
"""WriteVtable serializes the vtable for the current object, if needed.
|
||||
|
||||
Before writing out the vtable, this checks pre-existing vtables for
|
||||
equality to this one. If an equal vtable is found, point the object to
|
||||
the existing vtable and return.
|
||||
|
||||
Because vtable values are sensitive to alignment of object data, not
|
||||
all logically-equal vtables will be deduplicated.
|
||||
|
||||
A vtable has the following format:
|
||||
<VOffsetT: size of the vtable in bytes, including this value>
|
||||
<VOffsetT: size of the object in bytes, including the vtable offset>
|
||||
<VOffsetT: offset for a field> * N, where N is the number of fields
|
||||
in the schema for this type. Includes deprecated fields.
|
||||
Thus, a vtable is made of 2 + N elements, each VOffsetT bytes wide.
|
||||
|
||||
An object has the following format:
|
||||
<SOffsetT: offset to this object's vtable (may be negative)>
|
||||
<byte: data>+
|
||||
"""
|
||||
|
||||
# Prepend a zero scalar to the object. Later in this function we'll
|
||||
# write an offset here that points to the object's vtable:
|
||||
self.PrependSOffsetTRelative(0)
|
||||
|
||||
objectOffset = self.Offset()
|
||||
|
||||
vtKey = []
|
||||
trim = True
|
||||
for elem in reversed(self.current_vtable):
|
||||
if elem == 0:
|
||||
if trim:
|
||||
continue
|
||||
else:
|
||||
elem = objectOffset - elem
|
||||
trim = False
|
||||
|
||||
vtKey.append(elem)
|
||||
|
||||
objectSize = UOffsetTFlags.py_type(objectOffset - self.objectEnd)
|
||||
vtKey.append(objectSize)
|
||||
vtKey = tuple(vtKey)
|
||||
# calculate the size of the object
|
||||
vt2Offset = self.vtables.get(vtKey)
|
||||
if vt2Offset is None:
|
||||
# Did not find a vtable, so write this one to the buffer.
|
||||
|
||||
# Write out the current vtable in reverse , because
|
||||
# serialization occurs in last-first order:
|
||||
i = len(self.current_vtable) - 1
|
||||
trailing = 0
|
||||
trim = True
|
||||
while i >= 0:
|
||||
off = 0
|
||||
elem = self.current_vtable[i]
|
||||
i -= 1
|
||||
|
||||
if elem == 0:
|
||||
if trim:
|
||||
trailing += 1
|
||||
continue
|
||||
else:
|
||||
# Forward reference to field;
|
||||
# use 32bit number to ensure no overflow:
|
||||
off = objectOffset - elem
|
||||
trim = False
|
||||
|
||||
self.PrependVOffsetT(off)
|
||||
|
||||
# The two metadata fields are written last.
|
||||
|
||||
# First, store the object bytesize:
|
||||
self.PrependVOffsetT(VOffsetTFlags.py_type(objectSize))
|
||||
|
||||
# Second, store the vtable bytesize:
|
||||
vBytes = len(self.current_vtable) - trailing + VtableMetadataFields
|
||||
vBytes *= N.VOffsetTFlags.bytewidth
|
||||
self.PrependVOffsetT(VOffsetTFlags.py_type(vBytes))
|
||||
|
||||
# Next, write the offset to the new vtable in the
|
||||
# already-allocated SOffsetT at the beginning of this object:
|
||||
objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
|
||||
encode.Write(
|
||||
packer.soffset,
|
||||
self.Bytes,
|
||||
objectStart,
|
||||
SOffsetTFlags.py_type(self.Offset() - objectOffset),
|
||||
)
|
||||
|
||||
# Finally, store this vtable in memory for future
|
||||
# deduplication:
|
||||
self.vtables[vtKey] = self.Offset()
|
||||
else:
|
||||
# Found a duplicate vtable.
|
||||
objectStart = SOffsetTFlags.py_type(len(self.Bytes) - objectOffset)
|
||||
self.head = UOffsetTFlags.py_type(objectStart)
|
||||
|
||||
# Write the offset to the found vtable in the
|
||||
# already-allocated SOffsetT at the beginning of this object:
|
||||
encode.Write(
|
||||
packer.soffset,
|
||||
self.Bytes,
|
||||
self.Head(),
|
||||
SOffsetTFlags.py_type(vt2Offset - objectOffset),
|
||||
)
|
||||
|
||||
self.current_vtable = None
|
||||
return objectOffset
|
||||
|
||||
def EndObject(self):
|
||||
"""EndObject writes data necessary to finish object construction."""
|
||||
self.assertNested()
|
||||
self.nested = False
|
||||
return self.WriteVtable()
|
||||
|
||||
def GrowByteBuffer(self):
|
||||
"""Doubles the size of the byteslice, and copies the old data towards
|
||||
|
||||
the end of the new buffer (since we build the buffer backwards).
|
||||
"""
|
||||
if len(self.Bytes) == Builder.MAX_BUFFER_SIZE:
|
||||
msg = "flatbuffers: cannot grow buffer beyond 2 gigabytes"
|
||||
raise BuilderSizeError(msg)
|
||||
|
||||
newSize = min(len(self.Bytes) * 2, Builder.MAX_BUFFER_SIZE)
|
||||
if newSize == 0:
|
||||
newSize = 1
|
||||
bytes2 = bytearray(newSize)
|
||||
bytes2[newSize - len(self.Bytes) :] = self.Bytes
|
||||
self.Bytes = bytes2
|
||||
|
||||
## @endcond
|
||||
|
||||
def Head(self):
|
||||
"""Get the start of useful data in the underlying byte buffer.
|
||||
|
||||
Note: unlike other functions, this value is interpreted as from the
|
||||
left.
|
||||
"""
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
return self.head
|
||||
## @endcond
|
||||
|
||||
## @cond FLATBUFFERS_INTERNAL
|
||||
def Offset(self):
|
||||
"""Offset relative to the end of the buffer."""
|
||||
return len(self.Bytes) - self.head
|
||||
|
||||
def Pad(self, n):
|
||||
"""Pad places zeros at the current offset."""
|
||||
if n <= 0:
|
||||
return
|
||||
new_head = self.head - n
|
||||
self.Bytes[new_head : self.head] = b"\x00" * n
|
||||
self.head = new_head
|
||||
|
||||
def Prep(self, size, additionalBytes):
|
||||
"""Prep prepares to write an element of `size` after `additional_bytes`
|
||||
|
||||
have been written, e.g. if you write a string, you need to align
|
||||
such the int length field is aligned to SizeInt32, and the string
|
||||
data follows it directly.
|
||||
If all you need to do is align, `additionalBytes` will be 0.
|
||||
"""
|
||||
|
||||
# Track the biggest thing we've ever aligned to.
|
||||
if size > self.minalign:
|
||||
self.minalign = size
|
||||
|
||||
# Find the amount of alignment needed such that `size` is properly
|
||||
# aligned after `additionalBytes`:
|
||||
head = self.head
|
||||
buf_len = len(self.Bytes)
|
||||
alignSize = (~(buf_len - head + additionalBytes)) + 1
|
||||
alignSize &= size - 1
|
||||
|
||||
# Reallocate the buffer if needed:
|
||||
needed = alignSize + size + additionalBytes
|
||||
while head < needed:
|
||||
oldBufSize = buf_len
|
||||
self.GrowByteBuffer()
|
||||
buf_len = len(self.Bytes)
|
||||
head += buf_len - oldBufSize
|
||||
self.head = head
|
||||
self.Pad(alignSize)
|
||||
|
||||
    def PrependSOffsetTRelative(self, off):
        """PrependSOffsetTRelative prepends an SOffsetT, relative to where it
        will be written.
        """

        # Ensure alignment is already done:
        self.Prep(N.SOffsetTFlags.bytewidth, 0)
        if not (off <= self.Offset()):
            msg = "flatbuffers: Offset arithmetic error."
            raise OffsetArithmeticError(msg)
        off2 = self.Offset() - off + N.SOffsetTFlags.bytewidth
        self.PlaceSOffsetT(off2)

    ## @endcond

    def PrependUOffsetTRelative(self, off):
        """Prepends an unsigned offset into vector data, relative to where it
        will be written.
        """

        # Ensure alignment is already done:
        self.Prep(N.UOffsetTFlags.bytewidth, 0)
        if not (off <= self.Offset()):
            msg = "flatbuffers: Offset arithmetic error."
            raise OffsetArithmeticError(msg)
        off2 = self.Offset() - off + N.UOffsetTFlags.bytewidth
        self.PlaceUOffsetT(off2)

    ## @cond FLATBUFFERS_INTERNAL
    def StartVector(self, elemSize, numElems, alignment):
        """StartVector initializes bookkeeping for writing a new vector.

        A vector has the following format:
          - <UOffsetT: number of elements in this vector>
          - <T: data>+, where T is the type of elements of this vector.
        """

        self.assertNotNested()
        self.nested = True
        self.vectorNumElems = numElems
        self.Prep(N.Uint32Flags.bytewidth, elemSize * numElems)
        self.Prep(alignment, elemSize * numElems)  # In case alignment > int.
        return self.Offset()

    ## @endcond

    def EndVector(self, numElems=None):
        """EndVector writes data necessary to finish vector construction."""

        self.assertNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = False
        ## @endcond

        if numElems:
            warnings.warn("numElems is deprecated.", DeprecationWarning, stacklevel=2)
            if numElems != self.vectorNumElems:
                raise EndVectorLengthMismatched()

        # we already made space for this, so write without PrependUint32
        self.PlaceUOffsetT(self.vectorNumElems)
        self.vectorNumElems = None
        return self.Offset()

    def CreateSharedString(self, s, encoding="utf-8", errors="strict"):
        """CreateSharedString checks if the string is already written to the
        buffer before calling CreateString.
        """

        if not self.sharedStrings:
            self.sharedStrings = {}
        elif s in self.sharedStrings:
            return self.sharedStrings[s]

        off = self.CreateString(s, encoding, errors)
        self.sharedStrings[s] = off

        return off

    def CreateString(self, s, encoding="utf-8", errors="strict"):
        """CreateString writes a null-terminated byte string as a vector."""

        self.assertNotNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = True
        ## @endcond

        if isinstance(s, compat.string_types):
            x = s.encode(encoding, errors)
        elif isinstance(s, compat.binary_types):
            x = s
        else:
            raise TypeError("non-string passed to CreateString")

        payload_len = len(x)
        self.Prep(N.UOffsetTFlags.bytewidth, (payload_len + 1) * N.Uint8Flags.bytewidth)
        self.Place(0, N.Uint8Flags)

        new_head = self.head - payload_len
        self.head = new_head
        self.Bytes[new_head : new_head + payload_len] = x

        self.vectorNumElems = payload_len
        return self.EndVector()

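`CreateString` above produces the standard FlatBuffers string encoding: a little-endian `uint32` element count, the UTF-8 payload, and a trailing NUL terminator. A minimal sketch of that wire layout using only `struct` (this ignores the builder's back-to-front writing and alignment, so it is illustrative, not the library's code path):

```python
import struct

def encode_fb_string(s):
    """Lay out a FlatBuffers-style string: <uint32 length><payload><NUL>."""
    payload = s.encode("utf-8")
    return struct.pack("<I", len(payload)) + payload + b"\x00"

buf = encode_fb_string("hi")
length = struct.unpack_from("<I", buf, 0)[0]  # element count, excludes NUL
```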
    def CreateByteVector(self, x):
        """CreateByteVector writes a byte vector."""

        self.assertNotNested()
        ## @cond FLATBUFFERS_INTERNAL
        self.nested = True
        ## @endcond

        if not isinstance(x, compat.binary_types):
            raise TypeError("non-byte vector passed to CreateByteVector")

        data_len = len(x)
        self.Prep(N.UOffsetTFlags.bytewidth, data_len * N.Uint8Flags.bytewidth)
        new_head = self.head - data_len
        self.head = new_head
        self.Bytes[new_head : new_head + data_len] = x

        self.vectorNumElems = data_len
        return self.EndVector()

    def CreateNumpyVector(self, x):
        """CreateNumpyVector writes a numpy array into the buffer."""

        if np is None:
            # Numpy is required for this feature
            raise NumpyRequiredForThisFeature("Numpy was not found.")

        if not isinstance(x, np.ndarray):
            raise TypeError("non-numpy-ndarray passed to CreateNumpyVector")

        if x.dtype.kind not in ["b", "i", "u", "f"]:
            raise TypeError("numpy-ndarray holds elements of unsupported datatype")

        if x.ndim > 1:
            raise TypeError("multidimensional-ndarray passed to CreateNumpyVector")

        self.StartVector(x.itemsize, x.size, x.dtype.alignment)

        # Ensure little endian byte ordering
        if x.dtype.str[0] == "<":
            x_lend = x
        else:
            x_lend = x.byteswap(inplace=False)

        # tobytes ensures c_contiguous ordering
        payload = x_lend.tobytes(order="C")

        # Calculate total length
        payload_len = len(payload)
        new_head = self.head - payload_len
        self.head = new_head
        self.Bytes[new_head : new_head + payload_len] = payload

        self.vectorNumElems = x.size
        return self.EndVector()

    ## @cond FLATBUFFERS_INTERNAL
    def assertNested(self):
        """Check that we are in the process of building an object."""

        if not self.nested:
            raise IsNotNestedError()

    def assertNotNested(self):
        """Check that no other objects are being built while making this
        object. If not, raise an exception.
        """

        if self.nested:
            raise IsNestedError()

    def assertStructIsInline(self, obj):
        """Structs are always stored inline, so they need to be created right
        where they are used. You'll get this error if you created one
        elsewhere.
        """

        N.enforce_number(obj, N.UOffsetTFlags)
        if obj != self.Offset():
            msg = (
                "flatbuffers: Tried to write a Struct at an Offset that "
                "is different from the current Offset of the Builder."
            )
            raise StructIsNotInlineError(msg)

    def Slot(self, slotnum):
        """Slot sets the vtable key `voffset` to the current location in the
        buffer.
        """
        self.assertNested()
        self.current_vtable[slotnum] = self.Offset()

    ## @endcond

    def __Finish(self, rootTable, sizePrefix, file_identifier=None):
        """Finish finalizes a buffer, pointing to the given `rootTable`."""
        N.enforce_number(rootTable, N.UOffsetTFlags)

        prepSize = N.UOffsetTFlags.bytewidth
        if file_identifier is not None:
            prepSize += N.Int32Flags.bytewidth
        if sizePrefix:
            prepSize += N.Int32Flags.bytewidth
        self.Prep(self.minalign, prepSize)

        if file_identifier is not None:
            self.Prep(N.UOffsetTFlags.bytewidth, encode.FILE_IDENTIFIER_LENGTH)

            # Convert bytes object file_identifier to an array of 4 8-bit integers,
            # and use big-endian to enforce size compliance.
            # https://docs.python.org/2/library/struct.html#format-characters
            file_identifier = N.struct.unpack(">BBBB", file_identifier)
            for i in range(encode.FILE_IDENTIFIER_LENGTH - 1, -1, -1):
                # Place the bytes of the file_identifier in reverse order:
                self.Place(file_identifier[i], N.Uint8Flags)

        self.PrependUOffsetTRelative(rootTable)
        if sizePrefix:
            size = len(self.Bytes) - self.head
            N.enforce_number(size, N.Int32Flags)
            self.PrependInt32(size)
        self.finished = True
        return self.head

    def Finish(self, rootTable, file_identifier=None):
        """Finish finalizes a buffer, pointing to the given `rootTable`."""
        return self.__Finish(rootTable, False, file_identifier=file_identifier)

    def FinishSizePrefixed(self, rootTable, file_identifier=None):
        """Finish finalizes a buffer, pointing to the given `rootTable`,
        with the size prefixed.
        """
        return self.__Finish(rootTable, True, file_identifier=file_identifier)

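A finished (non-size-prefixed) buffer starts with the root table's `UOffsetT`, immediately followed by the 4-byte file identifier when one was supplied. A hand-built sketch of that header layout; the offset value and the identifier `b"MONS"` are made up for illustration:

```python
import struct

root_offset = 12                 # hypothetical UOffsetT to the root table
file_identifier = b"MONS"        # hypothetical 4-byte schema identifier
header = struct.pack("<I", root_offset) + file_identifier

# A reader recovers both fields the same way:
parsed_root = struct.unpack_from("<I", header, 0)[0]
parsed_ident = header[4:8]
```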
    ## @cond FLATBUFFERS_INTERNAL
    def Prepend(self, flags, off):
        self.Prep(flags.bytewidth, 0)
        self.Place(off, flags)

    def PrependSlot(self, flags, o, x, d):
        if x is not None:
            N.enforce_number(x, flags)
        if d is not None:
            N.enforce_number(d, flags)
        if x != d or (self.forceDefaults and d is not None):
            self.Prepend(flags, x)
            self.Slot(o)

    def PrependBoolSlot(self, *args):
        self.PrependSlot(N.BoolFlags, *args)

    def PrependByteSlot(self, *args):
        self.PrependSlot(N.Uint8Flags, *args)

    def PrependUint8Slot(self, *args):
        self.PrependSlot(N.Uint8Flags, *args)

    def PrependUint16Slot(self, *args):
        self.PrependSlot(N.Uint16Flags, *args)

    def PrependUint32Slot(self, *args):
        self.PrependSlot(N.Uint32Flags, *args)

    def PrependUint64Slot(self, *args):
        self.PrependSlot(N.Uint64Flags, *args)

    def PrependInt8Slot(self, *args):
        self.PrependSlot(N.Int8Flags, *args)

    def PrependInt16Slot(self, *args):
        self.PrependSlot(N.Int16Flags, *args)

    def PrependInt32Slot(self, *args):
        self.PrependSlot(N.Int32Flags, *args)

    def PrependInt64Slot(self, *args):
        self.PrependSlot(N.Int64Flags, *args)

    def PrependFloat32Slot(self, *args):
        self.PrependSlot(N.Float32Flags, *args)

    def PrependFloat64Slot(self, *args):
        self.PrependSlot(N.Float64Flags, *args)

    def PrependUOffsetTRelativeSlot(self, o, x, d):
        """PrependUOffsetTRelativeSlot prepends an UOffsetT onto the object at
        vtable slot `o`. If value `x` equals default `d`, then the slot will
        be set to zero and no other data will be written.
        """

        if x != d or self.forceDefaults:
            self.PrependUOffsetTRelative(x)
            self.Slot(o)

    def PrependStructSlot(self, v, x, d):
        """PrependStructSlot prepends a struct onto the object at vtable slot `v`.

        Structs are stored inline, so nothing additional is being added. In
        generated code, `d` is always 0.
        """

        N.enforce_number(d, N.UOffsetTFlags)
        if x != d:
            self.assertStructIsInline(x)
            self.Slot(v)

    ## @endcond

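`PrependSlot` only writes a field when its value differs from the schema default (or when `forceDefaults` is set and a default exists); otherwise the vtable slot stays zero and readers fall back to the default. That decision can be sketched as a pure function (the name is ours, for illustration):

```python
def should_write_slot(value, default, force_defaults=False):
    """Mirror of the PrependSlot condition: write when the value is
    non-default, or when defaults are forced and a default exists."""
    return value != default or (force_defaults and default is not None)

# A field equal to its default is elided unless defaults are forced.
skip = should_write_slot(0, 0)
write = should_write_slot(5, 0)
forced = should_write_slot(0, 0, force_defaults=True)
```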
    def PrependBool(self, x):
        """Prepend a `bool` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.BoolFlags, x)

    def PrependByte(self, x):
        """Prepend a `byte` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint8Flags, x)

    def PrependUint8(self, x):
        """Prepend an `uint8` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint8Flags, x)

    def PrependUint16(self, x):
        """Prepend an `uint16` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint16Flags, x)

    def PrependUint32(self, x):
        """Prepend an `uint32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint32Flags, x)

    def PrependUint64(self, x):
        """Prepend an `uint64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Uint64Flags, x)

    def PrependInt8(self, x):
        """Prepend an `int8` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int8Flags, x)

    def PrependInt16(self, x):
        """Prepend an `int16` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int16Flags, x)

    def PrependInt32(self, x):
        """Prepend an `int32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int32Flags, x)

    def PrependInt64(self, x):
        """Prepend an `int64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Int64Flags, x)

    def PrependFloat32(self, x):
        """Prepend a `float32` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Float32Flags, x)

    def PrependFloat64(self, x):
        """Prepend a `float64` to the Builder buffer.

        Note: aligns and checks for space.
        """
        self.Prepend(N.Float64Flags, x)

    def ForceDefaults(self, forceDefaults):
        """In order to save space, fields that are set to their default value
        don't get serialized into the buffer. Forcing defaults provides a
        way to manually disable this optimization. When set to `True`, will
        always serialize default values.
        """
        self.forceDefaults = forceDefaults

    ##############################################################

    ## @cond FLATBUFFERS_INTERNAL
    def PrependVOffsetT(self, x):
        self.Prepend(N.VOffsetTFlags, x)

    def Place(self, x, flags):
        """Place prepends a value specified by `flags` to the Builder,
        without checking for available space.
        """

        N.enforce_number(x, flags)
        new_head = self.head - flags.bytewidth
        self.head = new_head
        encode.Write(flags.packer_type, self.Bytes, new_head, x)

    def PlaceVOffsetT(self, x):
        """PlaceVOffsetT prepends a VOffsetT to the Builder, without checking
        for space.
        """
        N.enforce_number(x, N.VOffsetTFlags)
        new_head = self.head - N.VOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.voffset, self.Bytes, new_head, x)

    def PlaceSOffsetT(self, x):
        """PlaceSOffsetT prepends a SOffsetT to the Builder, without checking
        for space.
        """
        N.enforce_number(x, N.SOffsetTFlags)
        new_head = self.head - N.SOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.soffset, self.Bytes, new_head, x)

    def PlaceUOffsetT(self, x):
        """PlaceUOffsetT prepends a UOffsetT to the Builder, without checking
        for space.
        """
        N.enforce_number(x, N.UOffsetTFlags)
        new_head = self.head - N.UOffsetTFlags.bytewidth
        self.head = new_head
        encode.Write(packer.uoffset, self.Bytes, new_head, x)

    ## @endcond

## @}
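The `Place*` methods all follow the same pattern: move `head` left by the value's byte width, then pack the value at the new head, so the buffer fills from the end toward the front. A stdlib-only sketch of that prepend-style write (a simplified stand-in, not the Builder itself):

```python
import struct

buf = bytearray(8)
head = len(buf)  # writing proceeds from the end toward the front

def place_uint32(value):
    """Prepend a little-endian uint32 at the current head, as PlaceUOffsetT does."""
    global head
    head -= 4
    struct.pack_into("<I", buf, head, value)

place_uint32(0xAABBCCDD)
place_uint32(7)
# The value written second now sits in front of the one written first.
```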
@@ -0,0 +1,91 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A tiny version of `six` to help with backwards compatibility.

Also includes compatibility helpers for numpy.
"""

import sys

PY2 = sys.version_info[0] == 2
PY26 = sys.version_info[0:2] == (2, 6)
PY27 = sys.version_info[0:2] == (2, 7)
PY275 = sys.version_info[0:3] >= (2, 7, 5)
PY3 = sys.version_info[0] == 3
PY34 = sys.version_info[0:2] >= (3, 4)

if PY3:
    import importlib.machinery

    string_types = (str,)
    binary_types = (bytes, bytearray)
    range_func = range
    memoryview_type = memoryview
    struct_bool_decl = "?"
else:
    import imp

    string_types = (unicode,)
    if PY26 or PY27:
        binary_types = (str, bytearray)
    else:
        binary_types = (str,)
    range_func = xrange
    if PY26 or (PY27 and not PY275):
        memoryview_type = buffer
        struct_bool_decl = "<b"
    else:
        memoryview_type = memoryview
        struct_bool_decl = "?"

# Helper functions to facilitate making numpy optional instead of required


def import_numpy():
    """Returns the numpy module if it exists on the system,
    otherwise returns None.
    """
    if PY3:
        numpy_exists = importlib.machinery.PathFinder.find_spec("numpy") is not None
    else:
        try:
            imp.find_module("numpy")
            numpy_exists = True
        except ImportError:
            numpy_exists = False

    if numpy_exists:
        # We do this outside of the try/except block in case numpy exists
        # but is not installed correctly. We do not want to catch an
        # incorrect installation which would manifest as an ImportError.
        import numpy as np
    else:
        np = None

    return np


class NumpyRequiredForThisFeature(RuntimeError):
    """Error raised when a user tries to use a feature that
    requires numpy without having numpy installed.
    """

    pass


# NOTE: Future Jython support may require code here (look at `six`).
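`import_numpy` deliberately probes for the module first and only then runs the real `import`, so a present-but-broken installation still surfaces as an `ImportError` instead of being silently swallowed. On modern Python the same optional-import pattern is usually written with `importlib.util.find_spec` (a sketch, not this module's own code):

```python
import importlib.util

def optional_import(name):
    """Return the named module if importable, else None. The existence
    check uses find_spec, so a broken package still raises on import."""
    if importlib.util.find_spec(name) is None:
        return None
    return importlib.import_module(name)

sys_mod = optional_import("sys")              # stdlib: always present
missing = optional_import("no_such_package_")  # absent: returns None
```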
@@ -0,0 +1,45 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import number_types as N
from . import packer
from .compat import memoryview_type
from .compat import NumpyRequiredForThisFeature, import_numpy

np = import_numpy()

FILE_IDENTIFIER_LENGTH = 4


def Get(packer_type, buf, head):
    """Get decodes a value at buf[head] using `packer_type`."""
    return packer_type.unpack_from(memoryview_type(buf), head)[0]


def GetVectorAsNumpy(numpy_type, buf, count, offset):
    """GetVectorAsNumpy decodes values starting at buf[offset] as
    `numpy_type`, where `numpy_type` is a numpy dtype.
    """
    if np is not None:
        # TODO: could set .flags.writeable = False to make users jump through
        # hoops before modifying...
        return np.frombuffer(buf, dtype=numpy_type, count=count, offset=offset)
    else:
        raise NumpyRequiredForThisFeature("Numpy was not found.")


def Write(packer_type, buf, head, n):
    """Write encodes `n` at buf[head] using `packer_type`."""
    packer_type.pack_into(buf, head, n)
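`Get` and `Write` are thin wrappers around a pre-compiled `struct.Struct`'s `unpack_from` and `pack_into`. A round-trip sketch against a plain `bytearray`:

```python
import struct

uint32 = struct.Struct("<I")  # corresponds to packer.uint32
buf = bytearray(8)

uint32.pack_into(buf, 4, 1234)         # like Write(packer.uint32, buf, 4, 1234)
value = uint32.unpack_from(buf, 4)[0]  # like Get(packer.uint32, buf, 4)
```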
File diff suppressed because it is too large
@@ -0,0 +1,182 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import collections
import struct

from . import packer
from .compat import NumpyRequiredForThisFeature, import_numpy

np = import_numpy()

# For reference, see:
# https://docs.python.org/2/library/ctypes.html#ctypes-fundamental-data-types-2

# These classes could be collections.namedtuple instances, but those are new
# in 2.6 and we want to work towards 2.5 compatibility.


class BoolFlags(object):
    bytewidth = 1
    min_val = False
    max_val = True
    py_type = bool
    name = "bool"
    packer_type = packer.boolean


class Uint8Flags(object):
    bytewidth = 1
    min_val = 0
    max_val = (2**8) - 1
    py_type = int
    name = "uint8"
    packer_type = packer.uint8


class Uint16Flags(object):
    bytewidth = 2
    min_val = 0
    max_val = (2**16) - 1
    py_type = int
    name = "uint16"
    packer_type = packer.uint16


class Uint32Flags(object):
    bytewidth = 4
    min_val = 0
    max_val = (2**32) - 1
    py_type = int
    name = "uint32"
    packer_type = packer.uint32


class Uint64Flags(object):
    bytewidth = 8
    min_val = 0
    max_val = (2**64) - 1
    py_type = int
    name = "uint64"
    packer_type = packer.uint64


class Int8Flags(object):
    bytewidth = 1
    min_val = -(2**7)
    max_val = (2**7) - 1
    py_type = int
    name = "int8"
    packer_type = packer.int8


class Int16Flags(object):
    bytewidth = 2
    min_val = -(2**15)
    max_val = (2**15) - 1
    py_type = int
    name = "int16"
    packer_type = packer.int16


class Int32Flags(object):
    bytewidth = 4
    min_val = -(2**31)
    max_val = (2**31) - 1
    py_type = int
    name = "int32"
    packer_type = packer.int32


class Int64Flags(object):
    bytewidth = 8
    min_val = -(2**63)
    max_val = (2**63) - 1
    py_type = int
    name = "int64"
    packer_type = packer.int64


class Float32Flags(object):
    bytewidth = 4
    min_val = None
    max_val = None
    py_type = float
    name = "float32"
    packer_type = packer.float32


class Float64Flags(object):
    bytewidth = 8
    min_val = None
    max_val = None
    py_type = float
    name = "float64"
    packer_type = packer.float64


class SOffsetTFlags(Int32Flags):
    pass


class UOffsetTFlags(Uint32Flags):
    pass


class VOffsetTFlags(Uint16Flags):
    pass


def valid_number(n, flags):
    if flags.min_val is None and flags.max_val is None:
        return True
    return flags.min_val <= n <= flags.max_val


def enforce_number(n, flags):
    if flags.min_val is None and flags.max_val is None:
        return
    if not flags.min_val <= n <= flags.max_val:
        raise TypeError("bad number %s for type %s" % (str(n), flags.name))


def float32_to_uint32(n):
    packed = struct.pack("<1f", n)
    (converted,) = struct.unpack("<1L", packed)
    return converted


def uint32_to_float32(n):
    packed = struct.pack("<1L", n)
    (unpacked,) = struct.unpack("<1f", packed)
    return unpacked


def float64_to_uint64(n):
    packed = struct.pack("<1d", n)
    (converted,) = struct.unpack("<1Q", packed)
    return converted


def uint64_to_float64(n):
    packed = struct.pack("<1Q", n)
    (unpacked,) = struct.unpack("<1d", packed)
    return unpacked


def to_numpy_type(number_type):
    if np is not None:
        return np.dtype(number_type.name).newbyteorder("<")
    else:
        raise NumpyRequiredForThisFeature("Numpy was not found.")
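The `float32_to_uint32` family performs a bit-for-bit reinterpretation: pack the value with one format character, unpack the same bytes with another. For instance, 1.0 as an IEEE-754 single precision value is 0x3F800000, which the round trip recovers exactly:

```python
import struct

def float32_to_uint32(n):
    # Reinterpret the 4 bytes of a little-endian float32 as a uint32.
    (converted,) = struct.unpack("<1L", struct.pack("<1f", n))
    return converted

def uint32_to_float32(n):
    # The inverse reinterpretation.
    (unpacked,) = struct.unpack("<1f", struct.pack("<1L", n))
    return unpacked

bits = float32_to_uint32(1.0)
back = uint32_to_float32(bits)
```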
@@ -0,0 +1,41 @@
# Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Provide pre-compiled struct packers for encoding and decoding.

See: https://docs.python.org/2/library/struct.html#format-characters
"""

import struct
from . import compat


boolean = struct.Struct(compat.struct_bool_decl)

uint8 = struct.Struct("<B")
uint16 = struct.Struct("<H")
uint32 = struct.Struct("<I")
uint64 = struct.Struct("<Q")

int8 = struct.Struct("<b")
int16 = struct.Struct("<h")
int32 = struct.Struct("<i")
int64 = struct.Struct("<q")

float32 = struct.Struct("<f")
float64 = struct.Struct("<d")

uoffset = uint32
soffset = int32
voffset = uint16
@@ -0,0 +1,148 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import encode
from . import number_types as N


class Table(object):
    """Table wraps a byte slice and provides read access to its data.

    The variable `Pos` indicates the root of the FlatBuffers object therein.
    """

    __slots__ = ("Bytes", "Pos")

    def __init__(self, buf, pos):
        N.enforce_number(pos, N.UOffsetTFlags)

        self.Bytes = buf
        self.Pos = pos

    def Offset(self, vtableOffset):
        """Offset provides access into the Table's vtable.

        Deprecated fields are ignored by checking the vtable's length.
        """

        vtable = self.Pos - self.Get(N.SOffsetTFlags, self.Pos)
        vtableEnd = self.Get(N.VOffsetTFlags, vtable)
        if vtableOffset < vtableEnd:
            return self.Get(N.VOffsetTFlags, vtable + vtableOffset)
        return 0

    def Indirect(self, off):
        """Indirect retrieves the relative offset stored at `off`."""
        N.enforce_number(off, N.UOffsetTFlags)
        return off + encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)

    def String(self, off):
        """String gets a string from data stored inside the flatbuffer."""
        N.enforce_number(off, N.UOffsetTFlags)
        off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        start = off + N.UOffsetTFlags.bytewidth
        length = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        return bytes(self.Bytes[start : start + length])

    def VectorLen(self, off):
        """VectorLen retrieves the length of the vector whose offset is
        stored at `off` in this object.
        """
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        off += encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        ret = encode.Get(N.UOffsetTFlags.packer_type, self.Bytes, off)
        return ret

    def Vector(self, off):
        """Vector retrieves the start of data of the vector whose offset is
        stored at `off` in this object.
        """
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        x = off + self.Get(N.UOffsetTFlags, off)
        # data starts after metadata containing the vector length
        x += N.UOffsetTFlags.bytewidth
        return x

    def Union(self, t2, off):
        """Union initializes any Table-derived type to point to the union at
        the given offset.
        """
        assert type(t2) is Table
        N.enforce_number(off, N.UOffsetTFlags)

        off += self.Pos
        t2.Pos = off + self.Get(N.UOffsetTFlags, off)
        t2.Bytes = self.Bytes

    def Get(self, flags, off):
        """Get retrieves a value of the type specified by `flags` at the
        given offset.
        """
        N.enforce_number(off, N.UOffsetTFlags)
        return flags.py_type(encode.Get(flags.packer_type, self.Bytes, off))

    def GetSlot(self, slot, d, validator_flags):
        N.enforce_number(slot, N.VOffsetTFlags)
        if validator_flags is not None:
            N.enforce_number(d, validator_flags)
        off = self.Offset(slot)
        if off == 0:
            return d
        return self.Get(validator_flags, self.Pos + off)

    def GetVectorAsNumpy(self, flags, off):
        """GetVectorAsNumpy returns the vector that starts at `Vector(off)`
        as a numpy array with the type specified by `flags`. The array is
        a `view` into Bytes, so modifying the returned array will
        modify Bytes in place.
        """
        offset = self.Vector(off)
        length = self.VectorLen(off)  # TODO: length accounts for bytewidth, right?
        numpy_dtype = N.to_numpy_type(flags)
        return encode.GetVectorAsNumpy(numpy_dtype, self.Bytes, length, offset)

    def GetArrayAsNumpy(self, flags, off, length):
        """GetArrayAsNumpy returns the fixed-width array of length `length`
        that starts at `Vector(off)` as a numpy array with the type specified
        by `flags`. The array is a `view` into Bytes, so modifying the
        returned array will modify Bytes in place.
        """
        numpy_dtype = N.to_numpy_type(flags)
        return encode.GetVectorAsNumpy(numpy_dtype, self.Bytes, length, off)

    def GetVOffsetTSlot(self, slot, d):
        """GetVOffsetTSlot retrieves the VOffsetT that the given vtable
        location points to. If the vtable value is zero, the default
        value `d` will be returned.
        """

        N.enforce_number(slot, N.VOffsetTFlags)
        N.enforce_number(d, N.VOffsetTFlags)

        off = self.Offset(slot)
        if off == 0:
            return d
        return off
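`Table.Offset` resolves a field in two hops: the signed offset stored at `Pos` points back to the vtable, whose first `uint16` is the vtable's own byte length; a slot inside that length holds the field's offset from the table start, and zero (or an out-of-range slot) means the field is absent and the default applies. A hand-built sketch of that lookup on a synthetic buffer; the slot values are made up for illustration, not a real FlatBuffers object:

```python
import struct

# vtable: [vtable length = 6, table length = 8, slot@4 -> field at table+4]
vtable = struct.pack("<HHH", 6, 8, 4)
table_pos = len(vtable)  # table body starts right after the vtable
# At Pos sits an SOffsetT pointing back to the vtable: Pos - vtable_start.
buf = vtable + struct.pack("<i", table_pos)

def field_offset(buf, pos, vtable_offset):
    """Replicates Table.Offset: returns 0 for absent/deprecated fields."""
    soffset = struct.unpack_from("<i", buf, pos)[0]
    vtable_start = pos - soffset
    vtable_end = struct.unpack_from("<H", buf, vtable_start)[0]
    if vtable_offset < vtable_end:
        return struct.unpack_from("<H", buf, vtable_start + vtable_offset)[0]
    return 0

present = field_offset(buf, table_pos, 4)  # slot exists
absent = field_offset(buf, table_pos, 8)   # beyond the vtable's length
```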
@@ -0,0 +1,47 @@
# Copyright 2017 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from . import encode
from . import number_types
from . import packer


def GetSizePrefix(buf, offset):
    """Extract the size prefix from a buffer."""
    return encode.Get(packer.int32, buf, offset)


def GetBufferIdentifier(buf, offset, size_prefixed=False):
    """Extract the file_identifier from a buffer."""
    if size_prefixed:
        # Advance past the size prefix (one UOffsetT).
        offset += number_types.UOffsetTFlags.bytewidth
    # Advance past the root table pointer.
    offset += number_types.UOffsetTFlags.bytewidth
    # End of the FILE_IDENTIFIER.
    end = offset + encode.FILE_IDENTIFIER_LENGTH
    return buf[offset:end]


def BufferHasIdentifier(buf, offset, file_identifier, size_prefixed=False):
    got = GetBufferIdentifier(buf, offset, size_prefixed=size_prefixed)
    return got == file_identifier


def RemoveSizePrefix(buf, offset):
    """Create a slice of a size-prefixed buffer that has
    its position advanced just past the size prefix.
    """
    return buf, offset + number_types.Int32Flags.bytewidth
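The functions above operate on raw bytes. A minimal self-contained sketch of the same size-prefixed layout (assuming the usual 4-byte little-endian size prefix, 4-byte root table pointer, and 4-byte file identifier) can be built with `struct`:

```python
import struct

FILE_IDENTIFIER_LENGTH = 4  # FlatBuffers file identifiers are 4 bytes

def get_size_prefix(buf, offset):
    # Mirrors GetSizePrefix: a little-endian int32 at `offset`.
    return struct.unpack_from('<i', buf, offset)[0]

def get_buffer_identifier(buf, offset, size_prefixed=False):
    # Mirrors GetBufferIdentifier: skip the optional size prefix, then the
    # root table pointer, then read the 4-byte identifier.
    if size_prefixed:
        offset += 4
    offset += 4
    return buf[offset:offset + FILE_IDENTIFIER_LENGTH]

# Layout: size prefix = 20, root pointer = 12, identifier = b"MONS"
buf = struct.pack('<i', 20) + struct.pack('<I', 12) + b'MONS' + b'\x00' * 12
print(get_size_prefix(buf, 0))                            # 20
print(get_buffer_identifier(buf, 0, size_prefixed=True))  # b'MONS'
```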
Binary file not shown.
@@ -0,0 +1,10 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# Copyright 2007 Google Inc. All Rights Reserved.

__version__ = '6.33.5'
@@ -0,0 +1,53 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Any helper APIs."""

from typing import Optional, TypeVar

from google.protobuf import descriptor
from google.protobuf.message import Message

from google.protobuf.any_pb2 import Any


_MessageT = TypeVar('_MessageT', bound=Message)


def pack(
    msg: Message,
    type_url_prefix: Optional[str] = 'type.googleapis.com/',
    deterministic: Optional[bool] = None,
) -> Any:
  any_msg = Any()
  any_msg.Pack(
      msg=msg, type_url_prefix=type_url_prefix, deterministic=deterministic
  )
  return any_msg


def unpack(any_msg: Any, msg: Message) -> bool:
  return any_msg.Unpack(msg=msg)


def unpack_as(any_msg: Any, message_type: type[_MessageT]) -> _MessageT:
  unpacked = message_type()
  if unpack(any_msg, unpacked):
    return unpacked
  else:
    raise TypeError(
        f'Attempted to unpack {type_name(any_msg)} to'
        f' {message_type.__qualname__}'
    )


def type_name(any_msg: Any) -> str:
  return any_msg.TypeName()


def is_type(any_msg: Any, des: descriptor.Descriptor) -> bool:
  return any_msg.Is(des)
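The helpers above rely on `Any`'s type-URL convention. A tiny dependency-free sketch of that convention (the real `TypeName()` and `Is()` live on the generated `Any` message itself):

```python
def type_name_from_url(type_url: str) -> str:
    # Any.TypeName() returns the part after the last '/', i.e. the fully
    # qualified message name.
    return type_url.split('/')[-1]

def matches_type(type_url: str, full_name: str) -> bool:
    # Any.Is(descriptor) effectively compares against descriptor.full_name.
    return type_name_from_url(type_url) == full_name

url = 'type.googleapis.com/google.protobuf.Duration'
print(type_name_from_url(url))                   # google.protobuf.Duration
print(matches_type(url, 'google.protobuf.Duration'))  # True
```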
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/any.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/any.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"6\n\x03\x41ny\x12\x19\n\x08type_url\x18\x01 \x01(\tR\x07typeUrl\x12\x14\n\x05value\x18\x02 \x01(\x0cR\x05valueBv\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_ANY']._serialized_start=46
  _globals['_ANY']._serialized_end=100
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/api.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/api.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2
from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\xdb\x02\n\x03\x41pi\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x31\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.MethodR\x07methods\x12\x31\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x18\n\x07version\x18\x04 \x01(\tR\x07version\x12\x45\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12.\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.MixinR\x06mixins\x12/\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x08 \x01(\tR\x07\x65\x64ition\"\xd4\x02\n\x06Method\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12(\n\x10request_type_url\x18\x02 \x01(\tR\x0erequestTypeUrl\x12+\n\x11request_streaming\x18\x03 \x01(\x08R\x10requestStreaming\x12*\n\x11response_type_url\x18\x04 \x01(\tR\x0fresponseTypeUrl\x12-\n\x12response_streaming\x18\x05 \x01(\x08R\x11responseStreaming\x12\x31\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x33\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.SyntaxB\x02\x18\x01R\x06syntax\x12\x1c\n\x07\x65\x64ition\x18\x08 \x01(\tB\x02\x18\x01R\x07\x65\x64ition\"/\n\x05Mixin\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x12\n\x04root\x18\x02 \x01(\tR\x04rootBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_METHOD'].fields_by_name['syntax']._loaded_options = None
  _globals['_METHOD'].fields_by_name['syntax']._serialized_options = b'\030\001'
  _globals['_METHOD'].fields_by_name['edition']._loaded_options = None
  _globals['_METHOD'].fields_by_name['edition']._serialized_options = b'\030\001'
  _globals['_API']._serialized_start=113
  _globals['_API']._serialized_end=460
  _globals['_METHOD']._serialized_start=463
  _globals['_METHOD']._serialized_end=803
  _globals['_MIXIN']._serialized_start=805
  _globals['_MIXIN']._serialized_end=852
# @@protoc_insertion_point(module_scope)
+46
@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/compiler/plugin.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/compiler/plugin.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"c\n\x07Version\x12\x14\n\x05major\x18\x01 \x01(\x05R\x05major\x12\x14\n\x05minor\x18\x02 \x01(\x05R\x05minor\x12\x14\n\x05patch\x18\x03 \x01(\x05R\x05patch\x12\x16\n\x06suffix\x18\x04 \x01(\tR\x06suffix\"\xcf\x02\n\x14\x43odeGeneratorRequest\x12(\n\x10\x66ile_to_generate\x18\x01 \x03(\tR\x0e\x66ileToGenerate\x12\x1c\n\tparameter\x18\x02 \x01(\tR\tparameter\x12\x43\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProtoR\tprotoFile\x12\\\n\x17source_file_descriptors\x18\x11 \x03(\x0b\x32$.google.protobuf.FileDescriptorProtoR\x15sourceFileDescriptors\x12L\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.VersionR\x0f\x63ompilerVersion\"\x85\x04\n\x15\x43odeGeneratorResponse\x12\x14\n\x05\x65rror\x18\x01 \x01(\tR\x05\x65rror\x12-\n\x12supported_features\x18\x02 \x01(\x04R\x11supportedFeatures\x12\'\n\x0fminimum_edition\x18\x03 \x01(\x05R\x0eminimumEdition\x12\'\n\x0fmaximum_edition\x18\x04 \x01(\x05R\x0emaximumEdition\x12H\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.FileR\x04\x66ile\x1a\xb1\x01\n\x04\x46ile\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\'\n\x0finsertion_point\x18\x02 \x01(\tR\x0einsertionPoint\x12\x18\n\x07\x63ontent\x18\x0f \x01(\tR\x07\x63ontent\x12R\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfoR\x11generatedCodeInfo\"W\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x12\x1d\n\x19\x46\x45\x41TURE_SUPPORTS_EDITIONS\x10\x02\x42r\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb\xaa\x02\x18Google.Protobuf.Compiler')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb\252\002\030Google.Protobuf.Compiler'
  _globals['_VERSION']._serialized_start=101
  _globals['_VERSION']._serialized_end=200
  _globals['_CODEGENERATORREQUEST']._serialized_start=203
  _globals['_CODEGENERATORREQUEST']._serialized_end=538
  _globals['_CODEGENERATORRESPONSE']._serialized_start=541
  _globals['_CODEGENERATORRESPONSE']._serialized_end=1058
  _globals['_CODEGENERATORRESPONSE_FILE']._serialized_start=792
  _globals['_CODEGENERATORRESPONSE_FILE']._serialized_end=969
  _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_start=971
  _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_end=1058
# @@protoc_insertion_point(module_scope)
File diff suppressed because it is too large
Load Diff
+172
@@ -0,0 +1,172 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Provides a container for DescriptorProtos."""

__author__ = 'matthewtoia@google.com (Matt Toia)'

import warnings


class Error(Exception):
  pass


class DescriptorDatabaseConflictingDefinitionError(Error):
  """Raised when a proto is added with the same name & different descriptor."""


class DescriptorDatabase(object):
  """A container accepting FileDescriptorProtos and maps DescriptorProtos."""

  def __init__(self):
    self._file_desc_protos_by_file = {}
    self._file_desc_protos_by_symbol = {}

  def Add(self, file_desc_proto):
    """Adds the FileDescriptorProto and its types to this database.

    Args:
      file_desc_proto: The FileDescriptorProto to add.
    Raises:
      DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
        add a proto with the same name but a different definition than an
        existing proto in the database.
    """
    proto_name = file_desc_proto.name
    if proto_name not in self._file_desc_protos_by_file:
      self._file_desc_protos_by_file[proto_name] = file_desc_proto
    elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
      raise DescriptorDatabaseConflictingDefinitionError(
          '%s already added, but with different descriptor.' % proto_name)
    else:
      return

    # Add all the top-level descriptors to the index.
    package = file_desc_proto.package
    for message in file_desc_proto.message_type:
      for name in _ExtractSymbols(message, package):
        self._AddSymbol(name, file_desc_proto)
    for enum in file_desc_proto.enum_type:
      self._AddSymbol(
          ('.'.join((package, enum.name)) if package else enum.name),
          file_desc_proto,
      )
      for enum_value in enum.value:
        self._file_desc_protos_by_symbol[
            '.'.join((package, enum_value.name)) if package else enum_value.name
        ] = file_desc_proto
    for extension in file_desc_proto.extension:
      self._AddSymbol(
          ('.'.join((package, extension.name)) if package else extension.name),
          file_desc_proto,
      )
    for service in file_desc_proto.service:
      self._AddSymbol(
          ('.'.join((package, service.name)) if package else service.name),
          file_desc_proto,
      )

  def FindFileByName(self, name):
    """Finds the file descriptor proto by file name.

    Typically the file name is a relative path ending in a .proto file. The
    proto with the given name must have been added to this database using the
    Add method, or else an error will be raised.

    Args:
      name: The file name to find.

    Returns:
      The file descriptor proto matching the name.

    Raises:
      KeyError if no file by the given name was added.
    """
    return self._file_desc_protos_by_file[name]

  def FindFileContainingSymbol(self, symbol):
    """Finds the file descriptor proto containing the specified symbol.

    The symbol should be a fully qualified name including the file descriptor's
    package and any containing messages. Some examples:

    'some.package.name.Message'
    'some.package.name.Message.NestedEnum'
    'some.package.name.Message.some_field'

    The file descriptor proto containing the specified symbol must be added to
    this database using the Add method, or else an error will be raised.

    Args:
      symbol: The fully qualified symbol name.

    Returns:
      The file descriptor proto containing the symbol.

    Raises:
      KeyError if no file contains the specified symbol.
    """
    if symbol.count('.') == 1 and symbol[0] == '.':
      symbol = symbol.lstrip('.')
      warnings.warn(
          'Please remove the leading "." when calling '
          'FindFileContainingSymbol; this will become an error '
          'in Jan 2026.',
          RuntimeWarning,
      )
    try:
      return self._file_desc_protos_by_symbol[symbol]
    except KeyError:
      # Fields, enum values, and nested extensions are not in
      # _file_desc_protos_by_symbol. Try to find the top level
      # descriptor. A non-existent nested symbol under a valid top level
      # descriptor can also be found. The behavior is the same as in
      # protobuf C++.
      top_level, _, _ = symbol.rpartition('.')
      try:
        return self._file_desc_protos_by_symbol[top_level]
      except KeyError:
        # Raise the original symbol as a KeyError for better diagnostics.
        raise KeyError(symbol)

  def FindFileContainingExtension(self, extendee_name, extension_number):
    # TODO: implement this API.
    return None

  def FindAllExtensionNumbers(self, extendee_name):
    # TODO: implement this API.
    return []

  def _AddSymbol(self, name, file_desc_proto):
    if name in self._file_desc_protos_by_symbol:
      warn_msg = ('Conflict register for file "' + file_desc_proto.name +
                  '": ' + name +
                  ' is already defined in file "' +
                  self._file_desc_protos_by_symbol[name].name + '"')
      warnings.warn(warn_msg, RuntimeWarning)
    self._file_desc_protos_by_symbol[name] = file_desc_proto


def _ExtractSymbols(desc_proto, package):
  """Pulls out all the symbols from a descriptor proto.

  Args:
    desc_proto: The proto to extract symbols from.
    package: The package containing the descriptor type.

  Yields:
    The fully qualified name found in the descriptor.
  """
  message_name = package + '.' + desc_proto.name if package else desc_proto.name
  yield message_name
  for nested_type in desc_proto.nested_type:
    for symbol in _ExtractSymbols(nested_type, message_name):
      yield symbol
  for enum_type in desc_proto.enum_type:
    yield '.'.join((message_name, enum_type.name))
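`_ExtractSymbols` walks nested message types recursively. A stand-in using plain dicts (a hypothetical shape, not the real DescriptorProto) shows the qualified names it yields:

```python
def extract_symbols(desc, package):
    # desc: {'name': str, 'nested': [...]} -- a stand-in for DescriptorProto.
    # Mirrors _ExtractSymbols: yield the message itself, then recurse into
    # nested types using the message's own full name as the new prefix.
    full_name = package + '.' + desc['name'] if package else desc['name']
    yield full_name
    for nested in desc.get('nested', []):
        yield from extract_symbols(nested, full_name)

msg = {'name': 'Message', 'nested': [{'name': 'Inner'}]}
print(list(extract_symbols(msg, 'some.package')))
# ['some.package.Message', 'some.package.Message.Inner']
```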
+3369
File diff suppressed because one or more lines are too long
+1373
File diff suppressed because it is too large
Load Diff
@@ -0,0 +1,100 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Duration helper APIs."""

import datetime

from google.protobuf.duration_pb2 import Duration


def from_json_string(value: str) -> Duration:
  """Converts a string to Duration.

  Args:
    value: A string to be converted. The string must end with 's'. Any
      fractional digits (or none) are accepted as long as they fit into
      the supported precision. For example: "1s", "1.01s", "1.0000001s",
      "-3.100s"

  Raises:
    ValueError: On parsing problems.
  """
  duration = Duration()
  duration.FromJsonString(value)
  return duration


def from_microseconds(micros: float) -> Duration:
  """Converts microseconds to Duration."""
  duration = Duration()
  duration.FromMicroseconds(micros)
  return duration


def from_milliseconds(millis: float) -> Duration:
  """Converts milliseconds to Duration."""
  duration = Duration()
  duration.FromMilliseconds(millis)
  return duration


def from_nanoseconds(nanos: float) -> Duration:
  """Converts nanoseconds to Duration."""
  duration = Duration()
  duration.FromNanoseconds(nanos)
  return duration


def from_seconds(seconds: float) -> Duration:
  """Converts seconds to Duration."""
  duration = Duration()
  duration.FromSeconds(seconds)
  return duration


def from_timedelta(td: datetime.timedelta) -> Duration:
  """Converts timedelta to Duration."""
  duration = Duration()
  duration.FromTimedelta(td)
  return duration


def to_json_string(duration: Duration) -> str:
  """Converts Duration to string format.

  Returns:
    A string converted from the given Duration. The string format will
    contain 3, 6, or 9 fractional digits depending on the precision
    required to represent the exact Duration value. For example: "1s",
    "1.010s", "1.000000100s", "-3.100s"
  """
  return duration.ToJsonString()


def to_microseconds(duration: Duration) -> int:
  """Converts a Duration to microseconds."""
  return duration.ToMicroseconds()


def to_milliseconds(duration: Duration) -> int:
  """Converts a Duration to milliseconds."""
  return duration.ToMilliseconds()


def to_nanoseconds(duration: Duration) -> int:
  """Converts a Duration to nanoseconds."""
  return duration.ToNanoseconds()


def to_seconds(duration: Duration) -> int:
  """Converts a Duration to seconds."""
  return duration.ToSeconds()


def to_timedelta(duration: Duration) -> datetime.timedelta:
  """Converts Duration to timedelta."""
  return duration.ToTimedelta()
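A `Duration` stores whole seconds plus nanoseconds. A dependency-free sketch of the timedelta round trip (mirroring the effect of `FromTimedelta`/`ToTimedelta` for non-negative deltas, since `timedelta` only carries microsecond precision):

```python
import datetime

def timedelta_to_fields(td: datetime.timedelta):
    # Split a non-negative timedelta into (seconds, nanos), the two fields
    # a Duration message carries.
    seconds = td.days * 86400 + td.seconds
    nanos = td.microseconds * 1000
    return seconds, nanos

def fields_to_timedelta(seconds: int, nanos: int) -> datetime.timedelta:
    # Inverse direction; sub-microsecond nanos are truncated, as timedelta
    # cannot represent them.
    return datetime.timedelta(seconds=seconds, microseconds=nanos // 1000)

td = datetime.timedelta(seconds=1, milliseconds=10)
print(timedelta_to_fields(td))                    # (1, 10000000)
print(fields_to_timedelta(1, 10_000_000) == td)   # True
```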
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/duration.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/duration.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\":\n\x08\x44uration\x12\x18\n\x07seconds\x18\x01 \x01(\x03R\x07seconds\x12\x14\n\x05nanos\x18\x02 \x01(\x05R\x05nanosB\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_DURATION']._serialized_start=51
  _globals['_DURATION']._serialized_end=109
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/empty.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/empty.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_EMPTY']._serialized_start=48
  _globals['_EMPTY']._serialized_end=55
# @@protoc_insertion_point(module_scope)
@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: google/protobuf/field_mask.proto
# Protobuf Python Version: 6.33.5
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    33,
    5,
    '',
    'google/protobuf/field_mask.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"!\n\tFieldMask\x12\x14\n\x05paths\x18\x01 \x03(\tR\x05pathsB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
  _globals['DESCRIPTOR']._loaded_options = None
  _globals['DESCRIPTOR']._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
  _globals['_FIELDMASK']._serialized_start=53
  _globals['_FIELDMASK']._serialized_end=86
# @@protoc_insertion_point(module_scope)
+7
@@ -0,0 +1,7 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
+136
@@ -0,0 +1,136 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Determine which implementation of the protobuf API is used in this process.
"""

import importlib
import os
import sys
import warnings

_GOOGLE3_PYTHON_UPB_DEFAULT = True


def _ApiVersionToImplementationType(api_version):
  if api_version == 2:
    return 'cpp'
  if api_version == 1:
    raise ValueError('api_version=1 is no longer supported.')
  if api_version == 0:
    return 'python'
  return None


_implementation_type = None
try:
  # pylint: disable=g-import-not-at-top
  from google.protobuf.internal import _api_implementation
  # The compile-time constants in the _api_implementation module can be used to
  # switch to a certain implementation of the Python API at build time.
  _implementation_type = _ApiVersionToImplementationType(
      _api_implementation.api_version)
except ImportError:
  pass  # Unspecified by compiler flags.


def _CanImport(mod_name):
  try:
    mod = importlib.import_module(mod_name)
    # Work around a known issue in the classic bootstrap .par import hook.
    if not mod:
      raise ImportError(mod_name + ' import succeeded but was None')
    return True
  except ImportError:
    return False


if _implementation_type is None:
  if _CanImport('google._upb._message'):
    _implementation_type = 'upb'
  elif _CanImport('google.protobuf.pyext._message'):
    _implementation_type = 'cpp'
  else:
    _implementation_type = 'python'


# This environment variable can be used to switch to a certain implementation
# of the Python API, overriding the compile-time constants in the
# _api_implementation module. Right now only 'python', 'cpp' and 'upb' are
# valid values. Any other value will raise an error.
_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
                                 _implementation_type)

if _implementation_type not in ('python', 'cpp', 'upb'):
  raise ValueError('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION {0} is not '
                   'supported. Please set to \'python\', \'cpp\' or '
                   '\'upb\'.'.format(_implementation_type))

if 'PyPy' in sys.version and _implementation_type == 'cpp':
  warnings.warn('PyPy does not work yet with cpp protocol buffers. '
                'Falling back to the python implementation.')
  _implementation_type = 'python'

_c_module = None

if _implementation_type == 'cpp':
  try:
    # pylint: disable=g-import-not-at-top
    from google.protobuf.pyext import _message
    sys.modules['google3.net.proto2.python.internal.cpp._message'] = _message
    _c_module = _message
    del _message
  except ImportError:
    # TODO: fall back to python
    warnings.warn(
        'Selected implementation cpp is not available.')
    pass

if _implementation_type == 'upb':
  try:
    # pylint: disable=g-import-not-at-top
    from google._upb import _message
    _c_module = _message
    del _message
  except ImportError:
    warnings.warn('Selected implementation upb is not available. '
                  'Falling back to the python implementation.')
    _implementation_type = 'python'
    pass

# Detect if serialization should be deterministic by default
try:
  # The presence of this module in a build allows the proto implementation to
  # be upgraded merely via build deps.
  #
  # NOTE: Merely importing this automatically enables deterministic proto
  # serialization for C++ code, but we still need to export it as a boolean so
  # that we can do the same for `_implementation_type == 'python'`.
  #
  # NOTE2: It is possible for C++ code to enable deterministic serialization by
  # default _without_ affecting Python code, if the C++ implementation is not in
  # use by this module. That is intended behavior, so we don't actually expose
  # this boolean outside of this module.
  #
  # pylint: disable=g-import-not-at-top,unused-import
  from google.protobuf import enable_deterministic_proto_serialization
|
||||
_python_deterministic_proto_serialization = True
|
||||
except ImportError:
|
||||
_python_deterministic_proto_serialization = False
|
||||
|
||||
|
||||
# Usage of this function is discouraged. Clients shouldn't care which
|
||||
# implementation of the API is in use. Note that there is no guarantee
|
||||
# that differences between APIs will be maintained.
|
||||
# Please don't use this function if possible.
|
||||
def Type():
|
||||
return _implementation_type
|
||||
|
||||
|
||||
# For internal use only
|
||||
def IsPythonDefaultSerializationDeterministic():
|
||||
return _python_deterministic_proto_serialization
|
||||
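The selection logic above maps the compile-time `api_version` constant to an implementation name, then lets the `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION` environment variable override it. A minimal standalone sketch of that mapping (the function name here is illustrative; the real module keeps this logic in `_ApiVersionToImplementationType`):

```python
import os


def api_version_to_implementation_type(api_version):
    """Mirror of the version mapping at the top of this file."""
    if api_version == 2:
        return 'cpp'
    if api_version == 1:
        raise ValueError('api_version=1 is no longer supported.')
    if api_version == 0:
        return 'python'
    return None


# The environment variable, when set, wins over the compile-time value.
implementation = os.getenv(
    'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
    api_version_to_implementation_type(2))
```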
+118
@@ -0,0 +1,118 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Builds descriptors, message classes and services for generated _pb2.py.

This file is only called in python generated _pb2.py files. It builds
descriptors, message classes and services that users can directly use
in generated code.
"""

__author__ = 'jieluo@google.com (Jie Luo)'

from google.protobuf.internal import enum_type_wrapper
from google.protobuf.internal import python_message
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database

_sym_db = _symbol_database.Default()


def BuildMessageAndEnumDescriptors(file_des, module):
  """Builds message and enum descriptors.

  Args:
    file_des: FileDescriptor of the .proto file
    module: Generated _pb2 module
  """

  def BuildNestedDescriptors(msg_des, prefix):
    for (name, nested_msg) in msg_des.nested_types_by_name.items():
      module_name = prefix + name.upper()
      module[module_name] = nested_msg
      BuildNestedDescriptors(nested_msg, module_name + '_')
    for enum_des in msg_des.enum_types:
      module[prefix + enum_des.name.upper()] = enum_des

  for (name, msg_des) in file_des.message_types_by_name.items():
    module_name = '_' + name.upper()
    module[module_name] = msg_des
    BuildNestedDescriptors(msg_des, module_name + '_')


def BuildTopDescriptorsAndMessages(file_des, module_name, module):
  """Builds top level descriptors and message classes.

  Args:
    file_des: FileDescriptor of the .proto file
    module_name: str, the name of generated _pb2 module
    module: Generated _pb2 module
  """

  def BuildMessage(msg_des, prefix):
    create_dict = {}
    for (name, nested_msg) in msg_des.nested_types_by_name.items():
      create_dict[name] = BuildMessage(nested_msg, prefix + msg_des.name + '.')
    create_dict['DESCRIPTOR'] = msg_des
    create_dict['__module__'] = module_name
    create_dict['__qualname__'] = prefix + msg_des.name
    message_class = _reflection.GeneratedProtocolMessageType(
        msg_des.name, (_message.Message,), create_dict)
    _sym_db.RegisterMessage(message_class)
    return message_class

  # top level enums
  for (name, enum_des) in file_des.enum_types_by_name.items():
    module['_' + name.upper()] = enum_des
    module[name] = enum_type_wrapper.EnumTypeWrapper(enum_des)
    for enum_value in enum_des.values:
      module[enum_value.name] = enum_value.number

  # top level extensions
  for (name, extension_des) in file_des.extensions_by_name.items():
    module[name.upper() + '_FIELD_NUMBER'] = extension_des.number
    module[name] = extension_des

  # services
  for (name, service) in file_des.services_by_name.items():
    module['_' + name.upper()] = service

  # Build messages.
  for (name, msg_des) in file_des.message_types_by_name.items():
    module[name] = BuildMessage(msg_des, '')


def AddHelpersToExtensions(file_des):
  """No-op kept so that old generated code works with the new runtime.

  Args:
    file_des: FileDescriptor of the .proto file
  """
  # TODO: Remove this no-op
  return


def BuildServices(file_des, module_name, module):
  """Builds services classes and services stub class.

  Args:
    file_des: FileDescriptor of the .proto file
    module_name: str, the name of generated _pb2 module
    module: Generated _pb2 module
  """
  # pylint: disable=g-import-not-at-top
  from google.protobuf import service_reflection
  # pylint: enable=g-import-not-at-top
  for (name, service) in file_des.services_by_name.items():
    module[name] = service_reflection.GeneratedServiceType(
        name, (),
        dict(DESCRIPTOR=service, __module__=module_name))
    stub_name = name + '_Stub'
    module[stub_name] = service_reflection.GeneratedServiceStubType(
        stub_name, (module[name],),
        dict(DESCRIPTOR=service, __module__=module_name))
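The naming convention `BuildMessageAndEnumDescriptors` applies (descriptors exported as `'_' + NAME.upper()`, nested types joined with `'_'`) can be sketched with stub descriptor objects standing in for real protobuf descriptors (the `Fake*` classes below are illustrative, not part of the library):

```python
class FakeDescriptor:
    """Illustrative stand-in for a protobuf message Descriptor."""
    def __init__(self, name, nested=None, enums=None):
        self.name = name
        self.nested_types_by_name = {d.name: d for d in (nested or [])}
        self.enum_types = enums or []


class FakeFileDescriptor:
    """Illustrative stand-in for a protobuf FileDescriptor."""
    def __init__(self, messages):
        self.message_types_by_name = {d.name: d for d in messages}


def build_message_and_enum_descriptors(file_des, module):
    """Same key-naming logic as BuildMessageAndEnumDescriptors above."""
    def build_nested(msg_des, prefix):
        for name, nested_msg in msg_des.nested_types_by_name.items():
            key = prefix + name.upper()
            module[key] = nested_msg
            build_nested(nested_msg, key + '_')
    for name, msg_des in file_des.message_types_by_name.items():
        key = '_' + name.upper()
        module[key] = msg_des
        build_nested(msg_des, key + '_')


inner = FakeDescriptor('Inner')
outer = FakeDescriptor('Outer', nested=[inner])
module = {}
build_message_and_enum_descriptors(FakeFileDescriptor([outer]), module)
print(sorted(module))  # ['_OUTER', '_OUTER_INNER']
```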
+690
@@ -0,0 +1,690 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains container classes to represent different protocol buffer types.

This file defines container classes which represent categories of protocol
buffer field types which need extra maintenance. Currently these categories
are:

- Repeated scalar fields - These are all repeated fields which aren't
  composite (e.g. they are of simple types like int32, string, etc).
- Repeated composite fields - Repeated fields which are composite. This
  includes groups and nested messages.
"""

import collections.abc
import copy
import pickle
from typing import (
    Any,
    Iterable,
    Iterator,
    List,
    MutableMapping,
    MutableSequence,
    NoReturn,
    Optional,
    Sequence,
    TypeVar,
    Union,
    overload,
)


_T = TypeVar('_T')
_K = TypeVar('_K')
_V = TypeVar('_V')


class BaseContainer(Sequence[_T]):
  """Base container class."""

  # Minimizes memory usage and disallows assignment to other attributes.
  __slots__ = ['_message_listener', '_values']

  def __init__(self, message_listener: Any) -> None:
    """
    Args:
      message_listener: A MessageListener implementation.
        The RepeatedScalarFieldContainer will call this object's
        Modified() method when it is modified.
    """
    self._message_listener = message_listener
    self._values = []

  @overload
  def __getitem__(self, key: int) -> _T:
    ...

  @overload
  def __getitem__(self, key: slice) -> List[_T]:
    ...

  def __getitem__(self, key):
    """Retrieves item by the specified key."""
    return self._values[key]

  def __len__(self) -> int:
    """Returns the number of elements in the container."""
    return len(self._values)

  def __ne__(self, other: Any) -> bool:
    """Checks if another instance isn't equal to this one."""
    # The concrete classes should define __eq__.
    return not self == other

  __hash__ = None

  def __repr__(self) -> str:
    return repr(self._values)

  def sort(self, *args, **kwargs) -> None:
    # Continue to support the old sort_function keyword argument.
    # This is expected to be a rare occurrence, so use LBYL to avoid
    # the overhead of actually catching KeyError.
    if 'sort_function' in kwargs:
      kwargs['cmp'] = kwargs.pop('sort_function')
    self._values.sort(*args, **kwargs)

  def reverse(self) -> None:
    self._values.reverse()


# TODO: Remove this. BaseContainer does *not* conform to
# MutableSequence, only its subclasses do.
collections.abc.MutableSequence.register(BaseContainer)


class RepeatedScalarFieldContainer(BaseContainer[_T], MutableSequence[_T]):
  """Simple, type-checked, list-like container for holding repeated scalars."""

  # Disallows assignment to other attributes.
  __slots__ = ['_type_checker']

  def __init__(
      self,
      message_listener: Any,
      type_checker: Any,
  ) -> None:
    """Args:

    message_listener: A MessageListener implementation. The
      RepeatedScalarFieldContainer will call this object's Modified() method
      when it is modified.
    type_checker: A type_checkers.ValueChecker instance to run on elements
      inserted into this container.
    """
    super().__init__(message_listener)
    self._type_checker = type_checker

  def append(self, value: _T) -> None:
    """Appends an item to the list. Similar to list.append()."""
    self._values.append(self._type_checker.CheckValue(value))
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def insert(self, key: int, value: _T) -> None:
    """Inserts the item at the specified position. Similar to list.insert()."""
    self._values.insert(key, self._type_checker.CheckValue(value))
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def extend(self, elem_seq: Iterable[_T]) -> None:
    """Extends by appending the given iterable. Similar to list.extend()."""
    elem_seq_iter = iter(elem_seq)
    new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
    if new_values:
      self._values.extend(new_values)
      self._message_listener.Modified()

  def MergeFrom(
      self,
      other: Union['RepeatedScalarFieldContainer[_T]', Iterable[_T]],
  ) -> None:
    """Appends the contents of another repeated field of the same type to this
    one. We do not check the types of the individual fields.
    """
    self._values.extend(other)
    self._message_listener.Modified()

  def remove(self, elem: _T):
    """Removes an item from the list. Similar to list.remove()."""
    self._values.remove(elem)
    self._message_listener.Modified()

  def pop(self, key: Optional[int] = -1) -> _T:
    """Removes and returns an item at a given index. Similar to list.pop()."""
    value = self._values[key]
    self.__delitem__(key)
    return value

  @overload
  def __setitem__(self, key: int, value: _T) -> None:
    ...

  @overload
  def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
    ...

  def __setitem__(self, key, value) -> None:
    """Sets the item on the specified position."""
    if isinstance(key, slice):
      if key.step is not None:
        raise ValueError('Extended slices not supported')
      self._values[key] = map(self._type_checker.CheckValue, value)
      self._message_listener.Modified()
    else:
      self._values[key] = self._type_checker.CheckValue(value)
      self._message_listener.Modified()

  def __delitem__(self, key: Union[int, slice]) -> None:
    """Deletes the item at the specified position."""
    del self._values[key]
    self._message_listener.Modified()

  def __eq__(self, other: Any) -> bool:
    """Compares the current instance with another one."""
    if self is other:
      return True
    # Special case for the same type which should be common and fast.
    if isinstance(other, self.__class__):
      return other._values == self._values
    # We are presumably comparing against some other sequence type.
    return other == self._values

  def __deepcopy__(
      self,
      unused_memo: Any = None,
  ) -> 'RepeatedScalarFieldContainer[_T]':
    clone = RepeatedScalarFieldContainer(
        copy.deepcopy(self._message_listener), self._type_checker)
    clone.MergeFrom(self)
    return clone

  def __reduce__(self, **kwargs) -> NoReturn:
    raise pickle.PickleError(
        "Can't pickle repeated scalar fields, convert to list first")


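The `extend` method above follows a check-then-commit pattern: every element is run through the type checker before any of them is appended, so a failing element leaves the container unchanged. An illustrative sketch (the checker class and helper below are hypothetical, not the real `type_checkers` API):

```python
class IntChecker:
    """Illustrative stand-in for a type_checkers.ValueChecker."""
    def CheckValue(self, value):
        if not isinstance(value, int):
            raise TypeError('expected int, got %r' % (value,))
        return value


def checked_extend(values, checker, elems):
    # The list comprehension raises on the first bad element,
    # before anything has been appended to `values`.
    new_values = [checker.CheckValue(e) for e in elems]
    if new_values:
        values.extend(new_values)  # committed only if all passed


vals = [1]
try:
    checked_extend(vals, IntChecker(), [2, 'x'])
except TypeError:
    pass
print(vals)  # [1] — unchanged after the failed extend
```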
# TODO: Constrain T to be a subtype of Message.
class RepeatedCompositeFieldContainer(BaseContainer[_T], MutableSequence[_T]):
  """Simple, list-like container for holding repeated composite fields."""

  # Disallows assignment to other attributes.
  __slots__ = ['_message_descriptor']

  def __init__(self, message_listener: Any, message_descriptor: Any) -> None:
    """
    Note that we pass in a descriptor instead of the generated class directly,
    since at the time we construct a _RepeatedCompositeFieldContainer we
    haven't yet necessarily initialized the type that will be contained in the
    container.

    Args:
      message_listener: A MessageListener implementation.
        The RepeatedCompositeFieldContainer will call this object's
        Modified() method when it is modified.
      message_descriptor: A Descriptor instance describing the protocol type
        that should be present in this container. We'll use the
        _concrete_class field of this descriptor when the client calls add().
    """
    super().__init__(message_listener)
    self._message_descriptor = message_descriptor

  def add(self, **kwargs: Any) -> _T:
    """Adds a new element at the end of the list and returns it. Keyword
    arguments may be used to initialize the element.
    """
    new_element = self._message_descriptor._concrete_class(**kwargs)
    new_element._SetListener(self._message_listener)
    self._values.append(new_element)
    if not self._message_listener.dirty:
      self._message_listener.Modified()
    return new_element

  def append(self, value: _T) -> None:
    """Appends one element by copying the message."""
    new_element = self._message_descriptor._concrete_class()
    new_element._SetListener(self._message_listener)
    new_element.CopyFrom(value)
    self._values.append(new_element)
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def insert(self, key: int, value: _T) -> None:
    """Inserts the item at the specified position by copying."""
    new_element = self._message_descriptor._concrete_class()
    new_element._SetListener(self._message_listener)
    new_element.CopyFrom(value)
    self._values.insert(key, new_element)
    if not self._message_listener.dirty:
      self._message_listener.Modified()

  def extend(self, elem_seq: Iterable[_T]) -> None:
    """Extends by appending the given sequence of elements of the same type

    as this one, copying each individual message.
    """
    message_class = self._message_descriptor._concrete_class
    listener = self._message_listener
    values = self._values
    for message in elem_seq:
      new_element = message_class()
      new_element._SetListener(listener)
      new_element.MergeFrom(message)
      values.append(new_element)
    listener.Modified()

  def MergeFrom(
      self,
      other: Union['RepeatedCompositeFieldContainer[_T]', Iterable[_T]],
  ) -> None:
    """Appends the contents of another repeated field of the same type to this
    one, copying each individual message.
    """
    self.extend(other)

  def remove(self, elem: _T) -> None:
    """Removes an item from the list. Similar to list.remove()."""
    self._values.remove(elem)
    self._message_listener.Modified()

  def pop(self, key: Optional[int] = -1) -> _T:
    """Removes and returns an item at a given index. Similar to list.pop()."""
    value = self._values[key]
    self.__delitem__(key)
    return value

  @overload
  def __setitem__(self, key: int, value: _T) -> None:
    ...

  @overload
  def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
    ...

  def __setitem__(self, key, value):
    # This method is implemented to make RepeatedCompositeFieldContainer
    # structurally compatible with typing.MutableSequence. It is
    # otherwise unsupported and will always raise an error.
    raise TypeError(
        f'{self.__class__.__name__} object does not support item assignment')

  def __delitem__(self, key: Union[int, slice]) -> None:
    """Deletes the item at the specified position."""
    del self._values[key]
    self._message_listener.Modified()

  def __eq__(self, other: Any) -> bool:
    """Compares the current instance with another one."""
    if self is other:
      return True
    if not isinstance(other, self.__class__):
      raise TypeError('Can only compare repeated composite fields against '
                      'other repeated composite fields.')
    return self._values == other._values


class ScalarMap(MutableMapping[_K, _V]):
  """Simple, type-checked, dict-like container for holding repeated scalars."""

  # Disallows assignment to other attributes.
  __slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener',
               '_entry_descriptor']

  def __init__(
      self,
      message_listener: Any,
      key_checker: Any,
      value_checker: Any,
      entry_descriptor: Any,
  ) -> None:
    """
    Args:
      message_listener: A MessageListener implementation.
        The ScalarMap will call this object's Modified() method when it
        is modified.
      key_checker: A type_checkers.ValueChecker instance to run on keys
        inserted into this container.
      value_checker: A type_checkers.ValueChecker instance to run on values
        inserted into this container.
      entry_descriptor: The MessageDescriptor of a map entry: key and value.
    """
    self._message_listener = message_listener
    self._key_checker = key_checker
    self._value_checker = value_checker
    self._entry_descriptor = entry_descriptor
    self._values = {}

  def __getitem__(self, key: _K) -> _V:
    try:
      return self._values[key]
    except KeyError:
      key = self._key_checker.CheckValue(key)
      val = self._value_checker.DefaultValue()
      self._values[key] = val
      return val

  def __contains__(self, item: _K) -> bool:
    # We check the key's type to match the strong-typing flavor of the API.
    # Also this makes it easier to match the behavior of the C++ implementation.
    self._key_checker.CheckValue(item)
    return item in self._values

  @overload
  def get(self, key: _K) -> Optional[_V]:
    ...

  @overload
  def get(self, key: _K, default: _T) -> Union[_V, _T]:
    ...

  # We need to override this explicitly, because our defaultdict-like behavior
  # will make the default implementation (from our base class) always insert
  # the key.
  def get(self, key, default=None):
    if key in self:
      return self[key]
    else:
      return default

  def __setitem__(self, key: _K, value: _V) -> _T:
    checked_key = self._key_checker.CheckValue(key)
    checked_value = self._value_checker.CheckValue(value)
    self._values[checked_key] = checked_value
    self._message_listener.Modified()

  def __delitem__(self, key: _K) -> None:
    del self._values[key]
    self._message_listener.Modified()

  def __len__(self) -> int:
    return len(self._values)

  def __iter__(self) -> Iterator[_K]:
    return iter(self._values)

  def __repr__(self) -> str:
    return repr(self._values)

  def setdefault(self, key: _K, value: Optional[_V] = None) -> _V:
    if value is None:
      raise ValueError('The value for scalar map setdefault must be set.')
    if key not in self._values:
      self.__setitem__(key, value)
    return self[key]

  def MergeFrom(self, other: 'ScalarMap[_K, _V]') -> None:
    self._values.update(other._values)
    self._message_listener.Modified()

  def InvalidateIterators(self) -> None:
    # It appears that the only way to reliably invalidate iterators to
    # self._values is to ensure that its size changes.
    original = self._values
    self._values = original.copy()
    original[None] = None

  # This is defined in the abstract base, but we can do it much more cheaply.
  def clear(self) -> None:
    self._values.clear()
    self._message_listener.Modified()

  def GetEntryClass(self) -> Any:
    return self._entry_descriptor._concrete_class


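The comment above `get` explains why the map classes must override it: `__getitem__` has defaultdict-like semantics and inserts a default on a missing key, so the base-class `get` (which calls `__getitem__`) would mutate the map. A minimal sketch of that interaction (the `TinyScalarMap` class is illustrative, with the checkers stubbed out):

```python
class TinyScalarMap:
    """Illustrative sketch of ScalarMap's defaultdict-like lookup."""

    def __init__(self, default):
        self._values = {}
        self._default = default

    def __getitem__(self, key):
        try:
            return self._values[key]
        except KeyError:
            # Missing keys are materialized with the field's default value.
            self._values[key] = self._default
            return self._default

    def __contains__(self, key):
        return key in self._values

    def get(self, key, default=None):
        # Unlike self[key], this must not insert the missing key.
        return self[key] if key in self else default


m = TinyScalarMap(0)
print(m.get('a'))  # None — get() does not insert
print(m['a'])      # 0 — [] inserts the default
print('a' in m)    # True
```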
class MessageMap(MutableMapping[_K, _V]):
  """Simple, type-checked, dict-like container with submessage values."""

  # Disallows assignment to other attributes.
  __slots__ = ['_key_checker', '_values', '_message_listener',
               '_message_descriptor', '_entry_descriptor']

  def __init__(
      self,
      message_listener: Any,
      message_descriptor: Any,
      key_checker: Any,
      entry_descriptor: Any,
  ) -> None:
    """
    Args:
      message_listener: A MessageListener implementation.
        The MessageMap will call this object's Modified() method when it
        is modified.
      message_descriptor: A Descriptor instance describing the protocol type
        of the submessage values in this container.
      key_checker: A type_checkers.ValueChecker instance to run on keys
        inserted into this container.
      entry_descriptor: The MessageDescriptor of a map entry: key and value.
    """
    self._message_listener = message_listener
    self._message_descriptor = message_descriptor
    self._key_checker = key_checker
    self._entry_descriptor = entry_descriptor
    self._values = {}

  def __getitem__(self, key: _K) -> _V:
    key = self._key_checker.CheckValue(key)
    try:
      return self._values[key]
    except KeyError:
      new_element = self._message_descriptor._concrete_class()
      new_element._SetListener(self._message_listener)
      self._values[key] = new_element
      self._message_listener.Modified()
      return new_element

  def get_or_create(self, key: _K) -> _V:
    """get_or_create() is an alias for getitem (ie. map[key]).

    Args:
      key: The key to get or create in the map.

    This is useful in cases where you want to be explicit that the call is
    mutating the map. This can avoid lint errors for statements like this
    that otherwise would appear to be pointless statements:

      msg.my_map[key]
    """
    return self[key]

  @overload
  def get(self, key: _K) -> Optional[_V]:
    ...

  @overload
  def get(self, key: _K, default: _T) -> Union[_V, _T]:
    ...

  # We need to override this explicitly, because our defaultdict-like behavior
  # will make the default implementation (from our base class) always insert
  # the key.
  def get(self, key, default=None):
    if key in self:
      return self[key]
    else:
      return default

  def __contains__(self, item: _K) -> bool:
    item = self._key_checker.CheckValue(item)
    return item in self._values

  def __setitem__(self, key: _K, value: _V) -> NoReturn:
    raise ValueError('May not set values directly, call my_map[key].foo = 5')

  def __delitem__(self, key: _K) -> None:
    key = self._key_checker.CheckValue(key)
    del self._values[key]
    self._message_listener.Modified()

  def __len__(self) -> int:
    return len(self._values)

  def __iter__(self) -> Iterator[_K]:
    return iter(self._values)

  def __repr__(self) -> str:
    return repr(self._values)

  def setdefault(self, key: _K, value: Optional[_V] = None) -> _V:
    raise NotImplementedError(
        'Setting a message map value directly is not supported; call'
        ' my_map[key].foo = 5'
    )

  def MergeFrom(self, other: 'MessageMap[_K, _V]') -> None:
    # pylint: disable=protected-access
    for key in other._values:
      # According to documentation: "When parsing from the wire or when merging,
      # if there are duplicate map keys the last key seen is used".
      if key in self:
        del self[key]
      self[key].CopyFrom(other[key])
    # self._message_listener.Modified() not required here, because
    # mutations to submessages already propagate.

  def InvalidateIterators(self) -> None:
    # It appears that the only way to reliably invalidate iterators to
    # self._values is to ensure that its size changes.
    original = self._values
    self._values = original.copy()
    original[None] = None

  # This is defined in the abstract base, but we can do it much more cheaply.
  def clear(self) -> None:
    self._values.clear()
    self._message_listener.Modified()

  def GetEntryClass(self) -> Any:
    return self._entry_descriptor._concrete_class


class _UnknownField:
  """A parsed unknown field."""

  # Disallows assignment to other attributes.
  __slots__ = ['_field_number', '_wire_type', '_data']

  def __init__(self, field_number, wire_type, data):
    self._field_number = field_number
    self._wire_type = wire_type
    self._data = data
    return

  def __lt__(self, other):
    # pylint: disable=protected-access
    return self._field_number < other._field_number

  def __eq__(self, other):
    if self is other:
      return True
    # pylint: disable=protected-access
    return (self._field_number == other._field_number and
            self._wire_type == other._wire_type and
            self._data == other._data)


class UnknownFieldRef:  # pylint: disable=missing-class-docstring

  def __init__(self, parent, index):
    self._parent = parent
    self._index = index

  def _check_valid(self):
    if not self._parent:
      raise ValueError('UnknownField does not exist. '
                       'The parent message might be cleared.')
    if self._index >= len(self._parent):
      raise ValueError('UnknownField does not exist. '
                       'The parent message might be cleared.')

  @property
  def field_number(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._field_number

  @property
  def wire_type(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._wire_type

  @property
  def data(self):
    self._check_valid()
    # pylint: disable=protected-access
    return self._parent._internal_get(self._index)._data


class UnknownFieldSet:
  """UnknownField container."""

  # Disallows assignment to other attributes.
  __slots__ = ['_values']

  def __init__(self):
    self._values = []

  def __getitem__(self, index):
    if self._values is None:
      raise ValueError('UnknownFields does not exist. '
                       'The parent message might be cleared.')
    size = len(self._values)
    if index < 0:
      index += size
    if index < 0 or index >= size:
      raise IndexError('index %d out of range' % index)

    return UnknownFieldRef(self, index)

  def _internal_get(self, index):
    return self._values[index]

  def __len__(self):
    if self._values is None:
      raise ValueError('UnknownFields does not exist. '
                       'The parent message might be cleared.')
    return len(self._values)

  def _add(self, field_number, wire_type, data):
    unknown_field = _UnknownField(field_number, wire_type, data)
    self._values.append(unknown_field)
    return unknown_field

  def __iter__(self):
    for i in range(len(self)):
      yield UnknownFieldRef(self, i)

  def _extend(self, other):
    if other is None:
      return
    # pylint: disable=protected-access
    self._values.extend(other._values)

  def __eq__(self, other):
    if self is other:
      return True
    # Sort unknown fields because their order shouldn't
    # affect equality test.
    values = list(self._values)
    if other is None:
      return not values
    values.sort()
    # pylint: disable=protected-access
    other_values = sorted(other._values)
    return values == other_values

  def _clear(self):
    for value in self._values:
      # pylint: disable=protected-access
      if isinstance(value._data, UnknownFieldSet):
        value._data._clear()  # pylint: disable=protected-access
    self._values = None
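`UnknownFieldSet.__eq__` above compares the two sets after sorting, so field order does not affect equality. A minimal sketch of that behavior, using plain `(field_number, wire_type, data)` tuples as illustrative stand-ins for `_UnknownField` objects (tuples sort by field number first, mirroring `_UnknownField.__lt__`):

```python
def unknown_sets_equal(a, b):
    """Order-insensitive comparison of two lists of unknown-field tuples."""
    return sorted(a) == sorted(b)


x = [(2, 0, b'\x01'), (1, 2, b'hi')]
y = [(1, 2, b'hi'), (2, 0, b'\x01')]
print(unknown_sets_equal(x, y))  # True — same fields, different order
```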
+1066
File diff suppressed because it is too large
+806
@@ -0,0 +1,806 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.  All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Code for encoding protocol message primitives.

Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.

This code is designed to push the Python interpreter's performance to the
limits.

The basic idea is that at startup time, for every field (i.e. every
FieldDescriptor) we construct two functions:  a "sizer" and an "encoder".  The
sizer takes a value of this field's type and computes its byte size.  The
encoder takes a writer function and a value.  It encodes the value into byte
strings and invokes the writer function to write those strings.  Typically the
writer function is the write() method of a BytesIO.

We try to do as much work as possible when constructing the writer and the
sizer rather than when calling them.  In particular:
* We copy any needed global functions to local variables, so that we do not need
  to do costly global table lookups at runtime.
* Similarly, we try to do any attribute lookups at startup time if possible.
* Every field's tag is encoded to bytes at startup, since it can't change at
  runtime.
* Whatever component of the field size we can compute at startup, we do.
* We *avoid* sharing code if doing so would make the code slower and not sharing
  does not burden us too much.  For example, encoders for repeated fields do
  not just call the encoders for singular fields in a loop because this would
  add an extra function call overhead for every loop iteration; instead, we
  manually inline the single-value encoder into the loop.
* If a Python function lacks a return statement, Python actually generates
  instructions to pop the result of the last statement off the stack, push
  None onto the stack, and then return that.  If we really don't care what
  value is returned, then we can save two instructions by returning the
  result of the last statement.  It looks funny but it helps.
* We assume that type and bounds checking has happened at a higher level.
"""

__author__ = 'kenton@google.com (Kenton Varda)'

import struct

from google.protobuf.internal import wire_format


# This will overflow and thus become IEEE-754 "infinity".  We would use
# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
_POS_INF = 1e10000
_NEG_INF = -_POS_INF


def _VarintSize(value):
  """Compute the size of a varint value."""
  if value <= 0x7f: return 1
  if value <= 0x3fff: return 2
  if value <= 0x1fffff: return 3
  if value <= 0xfffffff: return 4
  if value <= 0x7ffffffff: return 5
  if value <= 0x3ffffffffff: return 6
  if value <= 0x1ffffffffffff: return 7
  if value <= 0xffffffffffffff: return 8
  if value <= 0x7fffffffffffffff: return 9
  return 10


def _SignedVarintSize(value):
  """Compute the size of a signed varint value."""
  if value < 0: return 10
  if value <= 0x7f: return 1
  if value <= 0x3fff: return 2
  if value <= 0x1fffff: return 3
  if value <= 0xfffffff: return 4
  if value <= 0x7ffffffff: return 5
  if value <= 0x3ffffffffff: return 6
  if value <= 0x1ffffffffffff: return 7
  if value <= 0xffffffffffffff: return 8
  if value <= 0x7fffffffffffffff: return 9
  return 10


def _TagSize(field_number):
  """Returns the number of bytes required to serialize a tag with this field
  number."""
  # Just pass in type 0, since the type won't affect the tag+type size.
  return _VarintSize(wire_format.PackTag(field_number, 0))
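The tag-size logic above can be illustrated standalone. This is a sketch using only the wire-format convention `tag = (field_number << 3) | wire_type`; the helper names are illustrative, not the module's:

```python
def varint_size(value):
    """Number of bytes a non-negative int occupies as a varint (7 bits per byte)."""
    size = 1
    while value > 0x7f:
        value >>= 7
        size += 1
    return size

def tag_size(field_number):
    # A tag packs (field_number << 3) | wire_type into a varint; the wire type
    # only occupies the low 3 bits, so it never changes the byte count and
    # 0 works as a placeholder, just like _TagSize passing type 0.
    return varint_size(field_number << 3)

# Field numbers 1..15 fit in a 1-byte tag; 16..2047 need 2 bytes.
print(tag_size(1), tag_size(15), tag_size(16), tag_size(2047))  # 1 1 2 2
```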
# --------------------------------------------------------------------
# In this section we define some generic sizers.  Each of these functions
# takes parameters specific to a particular field type, e.g. int32 or fixed64.
# It returns another function which in turn takes parameters specific to a
# particular field, e.g. the field number and whether it is repeated or packed.
# Look at the next section to see how these are used.


def _SimpleSizer(compute_value_size):
  """A sizer which uses the function compute_value_size to compute the size of
  each value.  Typically compute_value_size is _VarintSize."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = 0
        for element in value:
          result += compute_value_size(element)
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      def RepeatedFieldSize(value):
        result = tag_size * len(value)
        for element in value:
          result += compute_value_size(element)
        return result
      return RepeatedFieldSize
    else:
      def FieldSize(value):
        return tag_size + compute_value_size(value)
      return FieldSize

  return SpecificSizer


def _ModifiedSizer(compute_value_size, modify_value):
  """Like SimpleSizer, but modify_value is invoked on each value before it is
  passed to compute_value_size.  modify_value is typically ZigZagEncode."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = 0
        for element in value:
          result += compute_value_size(modify_value(element))
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      def RepeatedFieldSize(value):
        result = tag_size * len(value)
        for element in value:
          result += compute_value_size(modify_value(element))
        return result
      return RepeatedFieldSize
    else:
      def FieldSize(value):
        return tag_size + compute_value_size(modify_value(value))
      return FieldSize

  return SpecificSizer


def _FixedSizer(value_size):
  """Like _SimpleSizer except for a fixed-size field.  The input is the size
  of one value."""

  def SpecificSizer(field_number, is_repeated, is_packed):
    tag_size = _TagSize(field_number)
    if is_packed:
      local_VarintSize = _VarintSize
      def PackedFieldSize(value):
        result = len(value) * value_size
        return result + local_VarintSize(result) + tag_size
      return PackedFieldSize
    elif is_repeated:
      element_size = value_size + tag_size
      def RepeatedFieldSize(value):
        return len(value) * element_size
      return RepeatedFieldSize
    else:
      field_size = value_size + tag_size
      def FieldSize(value):
        return field_size
      return FieldSize

  return SpecificSizer


# ====================================================================
# Here we declare a sizer constructor for each field type.  Each "sizer
# constructor" is a function that takes (field_number, is_repeated, is_packed)
# as parameters and returns a sizer, which in turn takes a field value as
# a parameter and returns its encoded size.


Int32Sizer = Int64Sizer = EnumSizer = _SimpleSizer(_SignedVarintSize)

UInt32Sizer = UInt64Sizer = _SimpleSizer(_VarintSize)

SInt32Sizer = SInt64Sizer = _ModifiedSizer(
    _SignedVarintSize, wire_format.ZigZagEncode)

Fixed32Sizer = SFixed32Sizer = FloatSizer = _FixedSizer(4)
Fixed64Sizer = SFixed64Sizer = DoubleSizer = _FixedSizer(8)

BoolSizer = _FixedSizer(1)
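The ZigZag transform that the SInt sizers above route through `wire_format.ZigZagEncode` maps signed ints to unsigned ones so that small negative values still fit in short varints. A standalone sketch of the same mapping (assuming 64-bit fields):

```python
def zigzag_encode(n, bits=64):
    """Map signed to unsigned: 0->0, -1->1, 1->2, -2->3, 2->4, ..."""
    return (n << 1) ^ (n >> (bits - 1))

def zigzag_decode(z):
    """Inverse mapping back to a signed int."""
    return (z >> 1) ^ -(z & 1)

print([zigzag_encode(n) for n in (0, -1, 1, -2, 2)])  # [0, 1, 2, 3, 4]
```

Without this, a plain two's-complement -1 would always cost the full 10 varint bytes.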
def StringSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a string field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  local_len = len
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = local_len(element.encode('utf-8'))
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = local_len(value.encode('utf-8'))
      return tag_size + local_VarintSize(l) + l
    return FieldSize


def BytesSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a bytes field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  local_len = len
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = local_len(element)
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = local_len(value)
      return tag_size + local_VarintSize(l) + l
    return FieldSize


def GroupSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a group field."""

  tag_size = _TagSize(field_number) * 2
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        result += element.ByteSize()
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      return tag_size + value.ByteSize()
    return FieldSize


def MessageSizer(field_number, is_repeated, is_packed):
  """Returns a sizer for a message field."""

  tag_size = _TagSize(field_number)
  local_VarintSize = _VarintSize
  assert not is_packed
  if is_repeated:
    def RepeatedFieldSize(value):
      result = tag_size * len(value)
      for element in value:
        l = element.ByteSize()
        result += local_VarintSize(l) + l
      return result
    return RepeatedFieldSize
  else:
    def FieldSize(value):
      l = value.ByteSize()
      return tag_size + local_VarintSize(l) + l
    return FieldSize


# --------------------------------------------------------------------
# MessageSet is special: it needs custom logic to compute its size properly.


def MessageSetItemSizer(field_number):
  """Returns a sizer for extensions of MessageSet.

  The message set message looks like this:
    message MessageSet {
      repeated group Item = 1 {
        required int32 type_id = 2;
        required string message = 3;
      }
    }
  """
  static_size = (_TagSize(1) * 2 + _TagSize(2) + _VarintSize(field_number) +
                 _TagSize(3))
  local_VarintSize = _VarintSize

  def FieldSize(value):
    l = value.ByteSize()
    return static_size + local_VarintSize(l) + l

  return FieldSize


# --------------------------------------------------------------------
# Map is special: it needs custom logic to compute its size properly.


def MapSizer(field_descriptor, is_message_map):
  """Returns a sizer for a map field."""

  # Can't look at field_descriptor.message_type._concrete_class because it may
  # not have been initialized yet.
  message_type = field_descriptor.message_type
  message_sizer = MessageSizer(field_descriptor.number, False, False)

  def FieldSize(map_value):
    total = 0
    for key in map_value:
      value = map_value[key]
      # It's wasteful to create the messages and throw them away one second
      # later since we'll do the same for the actual encode.  But there's not an
      # obvious way to avoid this within the current design without tons of code
      # duplication.  For message map, value.ByteSize() should be called to
      # update the status.
      entry_msg = message_type._concrete_class(key=key, value=value)
      total += message_sizer(entry_msg)
      if is_message_map:
        value.ByteSize()
    return total

  return FieldSize
# ====================================================================
# Encoders!


def _VarintEncoder():
  """Return an encoder for a basic varint value (does not include tag)."""

  local_int2byte = struct.Struct('>B').pack

  def EncodeVarint(write, value, unused_deterministic=None):
    bits = value & 0x7f
    value >>= 7
    while value:
      write(local_int2byte(0x80|bits))
      bits = value & 0x7f
      value >>= 7
    return write(local_int2byte(bits))

  return EncodeVarint


def _SignedVarintEncoder():
  """Return an encoder for a basic signed varint value (does not include
  tag)."""

  local_int2byte = struct.Struct('>B').pack

  def EncodeSignedVarint(write, value, unused_deterministic=None):
    if value < 0:
      value += (1 << 64)
    bits = value & 0x7f
    value >>= 7
    while value:
      write(local_int2byte(0x80|bits))
      bits = value & 0x7f
      value >>= 7
    return write(local_int2byte(bits))

  return EncodeSignedVarint


_EncodeVarint = _VarintEncoder()
_EncodeSignedVarint = _SignedVarintEncoder()


def _VarintBytes(value):
  """Encode the given integer as a varint and return the bytes.  This is only
  called at startup time so it doesn't need to be fast."""

  pieces = []
  _EncodeVarint(pieces.append, value, True)
  return b"".join(pieces)


def TagBytes(field_number, wire_type):
  """Encode the given tag and return the bytes.  Only called at startup."""

  return bytes(_VarintBytes(wire_format.PackTag(field_number, wire_type)))
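A minimal standalone rendering of the EncodeVarint loop above, assuming the same base-128 encoding with least-significant groups first (this mirrors the vendored function but is not it):

```python
import struct

_int2byte = struct.Struct('>B').pack  # same single-byte packer the module caches

def encode_varint(value):
    """Encode a non-negative int as a base-128 varint, low 7-bit groups first."""
    out = []
    bits = value & 0x7f
    value >>= 7
    while value:
        out.append(_int2byte(0x80 | bits))  # continuation bit set: more bytes follow
        bits = value & 0x7f
        value >>= 7
    out.append(_int2byte(bits))  # final byte: continuation bit clear
    return b''.join(out)

# 1 fits in one byte; 300 = 0b10_0101100 splits into groups 0x2c and 0x02.
print(encode_varint(1), encode_varint(300))  # b'\x01' b'\xac\x02'
```

Collecting pieces in a list and joining is the same trick `_VarintBytes` uses by passing `pieces.append` as the `write` callback.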
# --------------------------------------------------------------------
|
||||
# As with sizers (see above), we have a number of common encoder
|
||||
# implementations.
|
||||
|
||||
|
||||
def _SimpleEncoder(wire_type, encode_value, compute_value_size):
|
||||
"""Return a constructor for an encoder for fields of a particular type.
|
||||
|
||||
Args:
|
||||
wire_type: The field's wire type, for encoding tags.
|
||||
encode_value: A function which encodes an individual value, e.g.
|
||||
_EncodeVarint().
|
||||
compute_value_size: A function which computes the size of an individual
|
||||
value, e.g. _VarintSize().
|
||||
"""
|
||||
|
||||
def SpecificEncoder(field_number, is_repeated, is_packed):
|
||||
if is_packed:
|
||||
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
|
||||
local_EncodeVarint = _EncodeVarint
|
||||
def EncodePackedField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
size = 0
|
||||
for element in value:
|
||||
size += compute_value_size(element)
|
||||
local_EncodeVarint(write, size, deterministic)
|
||||
for element in value:
|
||||
encode_value(write, element, deterministic)
|
||||
return EncodePackedField
|
||||
elif is_repeated:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeRepeatedField(write, value, deterministic):
|
||||
for element in value:
|
||||
write(tag_bytes)
|
||||
encode_value(write, element, deterministic)
|
||||
return EncodeRepeatedField
|
||||
else:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
return encode_value(write, value, deterministic)
|
||||
return EncodeField
|
||||
|
||||
return SpecificEncoder
|
||||
|
||||
|
||||
def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
|
||||
"""Like SimpleEncoder but additionally invokes modify_value on every value
|
||||
before passing it to encode_value. Usually modify_value is ZigZagEncode."""
|
||||
|
||||
def SpecificEncoder(field_number, is_repeated, is_packed):
|
||||
if is_packed:
|
||||
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
|
||||
local_EncodeVarint = _EncodeVarint
|
||||
def EncodePackedField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
size = 0
|
||||
for element in value:
|
||||
size += compute_value_size(modify_value(element))
|
||||
local_EncodeVarint(write, size, deterministic)
|
||||
for element in value:
|
||||
encode_value(write, modify_value(element), deterministic)
|
||||
return EncodePackedField
|
||||
elif is_repeated:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeRepeatedField(write, value, deterministic):
|
||||
for element in value:
|
||||
write(tag_bytes)
|
||||
encode_value(write, modify_value(element), deterministic)
|
||||
return EncodeRepeatedField
|
||||
else:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
return encode_value(write, modify_value(value), deterministic)
|
||||
return EncodeField
|
||||
|
||||
return SpecificEncoder
|
||||
|
||||
|
||||
def _StructPackEncoder(wire_type, format):
|
||||
"""Return a constructor for an encoder for a fixed-width field.
|
||||
|
||||
Args:
|
||||
wire_type: The field's wire type, for encoding tags.
|
||||
format: The format string to pass to struct.pack().
|
||||
"""
|
||||
|
||||
value_size = struct.calcsize(format)
|
||||
|
||||
def SpecificEncoder(field_number, is_repeated, is_packed):
|
||||
local_struct_pack = struct.pack
|
||||
if is_packed:
|
||||
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
|
||||
local_EncodeVarint = _EncodeVarint
|
||||
def EncodePackedField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
local_EncodeVarint(write, len(value) * value_size, deterministic)
|
||||
for element in value:
|
||||
write(local_struct_pack(format, element))
|
||||
return EncodePackedField
|
||||
elif is_repeated:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeRepeatedField(write, value, unused_deterministic=None):
|
||||
for element in value:
|
||||
write(tag_bytes)
|
||||
write(local_struct_pack(format, element))
|
||||
return EncodeRepeatedField
|
||||
else:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeField(write, value, unused_deterministic=None):
|
||||
write(tag_bytes)
|
||||
return write(local_struct_pack(format, value))
|
||||
return EncodeField
|
||||
|
||||
return SpecificEncoder
|
||||
|
||||
|
||||
def _FloatingPointEncoder(wire_type, format):
|
||||
"""Return a constructor for an encoder for float fields.
|
||||
|
||||
This is like StructPackEncoder, but catches errors that may be due to
|
||||
passing non-finite floating-point values to struct.pack, and makes a
|
||||
second attempt to encode those values.
|
||||
|
||||
Args:
|
||||
wire_type: The field's wire type, for encoding tags.
|
||||
format: The format string to pass to struct.pack().
|
||||
"""
|
||||
|
||||
value_size = struct.calcsize(format)
|
||||
if value_size == 4:
|
||||
def EncodeNonFiniteOrRaise(write, value):
|
||||
# Remember that the serialized form uses little-endian byte order.
|
||||
if value == _POS_INF:
|
||||
write(b'\x00\x00\x80\x7F')
|
||||
elif value == _NEG_INF:
|
||||
write(b'\x00\x00\x80\xFF')
|
||||
elif value != value: # NaN
|
||||
write(b'\x00\x00\xC0\x7F')
|
||||
else:
|
||||
raise
|
||||
elif value_size == 8:
|
||||
def EncodeNonFiniteOrRaise(write, value):
|
||||
if value == _POS_INF:
|
||||
write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
|
||||
elif value == _NEG_INF:
|
||||
write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
|
||||
elif value != value: # NaN
|
||||
write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
|
||||
else:
|
||||
raise
|
||||
else:
|
||||
raise ValueError('Can\'t encode floating-point values that are '
|
||||
'%d bytes long (only 4 or 8)' % value_size)
|
||||
|
||||
def SpecificEncoder(field_number, is_repeated, is_packed):
|
||||
local_struct_pack = struct.pack
|
||||
if is_packed:
|
||||
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
|
||||
local_EncodeVarint = _EncodeVarint
|
||||
def EncodePackedField(write, value, deterministic):
|
||||
write(tag_bytes)
|
||||
local_EncodeVarint(write, len(value) * value_size, deterministic)
|
||||
for element in value:
|
||||
# This try/except block is going to be faster than any code that
|
||||
# we could write to check whether element is finite.
|
||||
try:
|
||||
write(local_struct_pack(format, element))
|
||||
except SystemError:
|
||||
EncodeNonFiniteOrRaise(write, element)
|
||||
return EncodePackedField
|
||||
elif is_repeated:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeRepeatedField(write, value, unused_deterministic=None):
|
||||
for element in value:
|
||||
write(tag_bytes)
|
||||
try:
|
||||
write(local_struct_pack(format, element))
|
||||
except SystemError:
|
||||
EncodeNonFiniteOrRaise(write, element)
|
||||
return EncodeRepeatedField
|
||||
else:
|
||||
tag_bytes = TagBytes(field_number, wire_type)
|
||||
def EncodeField(write, value, unused_deterministic=None):
|
||||
write(tag_bytes)
|
||||
try:
|
||||
write(local_struct_pack(format, value))
|
||||
except SystemError:
|
||||
EncodeNonFiniteOrRaise(write, value)
|
||||
return EncodeField
|
||||
|
||||
return SpecificEncoder
|
||||
|
||||
|
||||
# ====================================================================
# Here we declare an encoder constructor for each field type.  These work
# very similarly to sizer constructors, described earlier.


Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)

UInt32Encoder = UInt64Encoder = _SimpleEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)

SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
    wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
    wire_format.ZigZagEncode)

# Note that Python conveniently guarantees that when using the '<' prefix on
# formats, they will also have the same size across all platforms (as opposed
# to without the prefix, where their sizes depend on the C compiler's basic
# type sizes).
Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')


def BoolEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a boolean field."""

  false_byte = b'\x00'
  true_byte = b'\x01'
  if is_packed:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
    local_EncodeVarint = _EncodeVarint
    def EncodePackedField(write, value, deterministic):
      write(tag_bytes)
      local_EncodeVarint(write, len(value), deterministic)
      for element in value:
        if element:
          write(true_byte)
        else:
          write(false_byte)
    return EncodePackedField
  elif is_repeated:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
    def EncodeRepeatedField(write, value, unused_deterministic=None):
      for element in value:
        write(tag_bytes)
        if element:
          write(true_byte)
        else:
          write(false_byte)
    return EncodeRepeatedField
  else:
    tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
    def EncodeField(write, value, unused_deterministic=None):
      write(tag_bytes)
      if value:
        return write(true_byte)
      return write(false_byte)
    return EncodeField


def StringEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a string field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  local_len = len
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        encoded = element.encode('utf-8')
        write(tag)
        local_EncodeVarint(write, local_len(encoded), deterministic)
        write(encoded)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      encoded = value.encode('utf-8')
      write(tag)
      local_EncodeVarint(write, local_len(encoded), deterministic)
      return write(encoded)
    return EncodeField
def BytesEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a bytes field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  local_len = len
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(tag)
        local_EncodeVarint(write, local_len(element), deterministic)
        write(element)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(tag)
      local_EncodeVarint(write, local_len(value), deterministic)
      return write(value)
    return EncodeField


def GroupEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a group field."""

  start_tag = TagBytes(field_number, wire_format.WIRETYPE_START_GROUP)
  end_tag = TagBytes(field_number, wire_format.WIRETYPE_END_GROUP)
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(start_tag)
        element._InternalSerialize(write, deterministic)
        write(end_tag)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(start_tag)
      value._InternalSerialize(write, deterministic)
      return write(end_tag)
    return EncodeField


def MessageEncoder(field_number, is_repeated, is_packed):
  """Returns an encoder for a message field."""

  tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
  local_EncodeVarint = _EncodeVarint
  assert not is_packed
  if is_repeated:
    def EncodeRepeatedField(write, value, deterministic):
      for element in value:
        write(tag)
        local_EncodeVarint(write, element.ByteSize(), deterministic)
        element._InternalSerialize(write, deterministic)
    return EncodeRepeatedField
  else:
    def EncodeField(write, value, deterministic):
      write(tag)
      local_EncodeVarint(write, value.ByteSize(), deterministic)
      return value._InternalSerialize(write, deterministic)
    return EncodeField


# --------------------------------------------------------------------
# As before, MessageSet is special.


def MessageSetItemEncoder(field_number):
  """Encoder for extensions of MessageSet.

  The message set message looks like this:
    message MessageSet {
      repeated group Item = 1 {
        required int32 type_id = 2;
        required string message = 3;
      }
    }
  """
  start_bytes = b"".join([
      TagBytes(1, wire_format.WIRETYPE_START_GROUP),
      TagBytes(2, wire_format.WIRETYPE_VARINT),
      _VarintBytes(field_number),
      TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)])
  end_bytes = TagBytes(1, wire_format.WIRETYPE_END_GROUP)
  local_EncodeVarint = _EncodeVarint

  def EncodeField(write, value, deterministic):
    write(start_bytes)
    local_EncodeVarint(write, value.ByteSize(), deterministic)
    value._InternalSerialize(write, deterministic)
    return write(end_bytes)

  return EncodeField


# --------------------------------------------------------------------
# As before, Map is special.


def MapEncoder(field_descriptor):
  """Encoder for a map field.

  Maps always have a wire format like this:
    message MapEntry {
      key_type key = 1;
      value_type value = 2;
    }
    repeated MapEntry map = N;
  """
  # Can't look at field_descriptor.message_type._concrete_class because it may
  # not have been initialized yet.
  message_type = field_descriptor.message_type
  encode_message = MessageEncoder(field_descriptor.number, False, False)

  def EncodeField(write, value, deterministic):
    value_keys = sorted(value.keys()) if deterministic else value
    for key in value_keys:
      entry_msg = message_type._concrete_class(key=key, value=value[key])
      encode_message(write, entry_msg, deterministic)

  return EncodeField
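The deterministic branch in MapEncoder above can be shown in isolation: when `deterministic` is set, keys are sorted before serialization so that repeated serializations of the same map are byte-identical. A toy sketch with (key, value) pairs standing in for real MapEntry messages:

```python
def serialize_map(map_value, deterministic):
    """Toy version of MapEncoder's key ordering: deterministic mode sorts the
    keys so the entry order (and thus the output bytes) is reproducible."""
    keys = sorted(map_value.keys()) if deterministic else map_value
    return [(k, map_value[k]) for k in keys]

m = {'b': 2, 'a': 1}
print(serialize_map(m, deterministic=True))  # [('a', 1), ('b', 2)]
```

In non-deterministic mode the entries come out in the dict's own iteration order, which is why that mode makes no cross-process stability guarantee.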
+112
@@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc.  All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""A simple wrapper around enum types to expose utility functions.

Instances are created as properties with the same name as the enum they wrap
on proto classes.  For usage, see:
  reflection_test.py
"""

import sys

__author__ = 'rabsatt@google.com (Kevin Rabsatt)'


class EnumTypeWrapper(object):
  """A utility for finding the names of enum values."""

  DESCRIPTOR = None

  # This is a type alias, which mypy typing stubs can type as
  # a genericized parameter constrained to an int, allowing subclasses
  # to be typed with more constraint in .pyi stubs
  # Eg.
  # def MyGeneratedEnum(Message):
  #   ValueType = NewType('ValueType', int)
  #   def Name(self, number: MyGeneratedEnum.ValueType) -> str
  ValueType = int

  def __init__(self, enum_type):
    """Inits EnumTypeWrapper with an EnumDescriptor."""
    self._enum_type = enum_type
    self.DESCRIPTOR = enum_type  # pylint: disable=invalid-name

  def Name(self, number):  # pylint: disable=invalid-name
    """Returns a string containing the name of an enum value."""
    try:
      return self._enum_type.values_by_number[number].name
    except KeyError:
      pass  # fall out to break exception chaining

    if not isinstance(number, int):
      raise TypeError(
          'Enum value for {} must be an int, but got {} {!r}.'.format(
              self._enum_type.name, type(number), number))
    else:
      # repr here to handle the odd case when you pass in a boolean.
      raise ValueError('Enum {} has no name defined for value {!r}'.format(
          self._enum_type.name, number))

  def Value(self, name):  # pylint: disable=invalid-name
    """Returns the value corresponding to the given enum name."""
    try:
      return self._enum_type.values_by_name[name].number
    except KeyError:
      pass  # fall out to break exception chaining
    raise ValueError('Enum {} has no value defined for name {!r}'.format(
        self._enum_type.name, name))

  def keys(self):
    """Return a list of the string names in the enum.

    Returns:
      A list of strs, in the order they were defined in the .proto file.
    """

    return [value_descriptor.name
            for value_descriptor in self._enum_type.values]

  def values(self):
    """Return a list of the integer values in the enum.

    Returns:
      A list of ints, in the order they were defined in the .proto file.
    """

    return [value_descriptor.number
            for value_descriptor in self._enum_type.values]

  def items(self):
    """Return a list of the (name, value) pairs of the enum.

    Returns:
      A list of (str, int) pairs, in the order they were defined
      in the .proto file.
    """
    return [(value_descriptor.name, value_descriptor.number)
            for value_descriptor in self._enum_type.values]

  def __getattr__(self, name):
    """Returns the value corresponding to the given enum name."""
    try:
      return super(
          EnumTypeWrapper,
          self).__getattribute__('_enum_type').values_by_name[name].number
    except KeyError:
      pass  # fall out to break exception chaining
    raise AttributeError('Enum {} has no value defined for name {!r}'.format(
        self._enum_type.name, name))

  def __or__(self, other):
    """Returns the union type of self and other."""
    if sys.version_info >= (3, 10):
|
||||
return type(self) | other
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
'You may not use | on EnumTypes (or classes) below python 3.10'
|
||||
)
|
||||
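The `Name()`/`Value()` lookups in `EnumTypeWrapper` above only rely on the descriptor exposing `values`, `values_by_name`, and `values_by_number`. A minimal sketch of that lookup behavior, using a hypothetical stand-in descriptor (`MiniEnumWrapper` and the `SimpleNamespace` objects are illustrative, not the real protobuf classes):

```python
from types import SimpleNamespace


class MiniEnumWrapper:
    """Duck-typed miniature of EnumTypeWrapper's lookup logic (illustrative only)."""

    def __init__(self, enum_type):
        self._enum_type = enum_type

    def Name(self, number):
        # Same lookup EnumTypeWrapper.Name performs.
        try:
            return self._enum_type.values_by_number[number].name
        except KeyError:
            raise ValueError('no name for value {!r}'.format(number))

    def Value(self, name):
        # Same lookup EnumTypeWrapper.Value performs.
        try:
            return self._enum_type.values_by_name[name].number
        except KeyError:
            raise ValueError('no value for name {!r}'.format(name))


# Hypothetical stand-in for an EnumDescriptor: just the attributes read above.
members = [SimpleNamespace(name=n, number=v) for n, v in [('RED', 0), ('GREEN', 1)]]
color = MiniEnumWrapper(SimpleNamespace(
    name='Color',
    values=members,
    values_by_name={m.name: m for m in members},
    values_by_number={m.number: m for m in members},
))

assert color.Name(1) == 'GREEN'
assert color.Value('RED') == 0
```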
+194
@@ -0,0 +1,194 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains _ExtensionDict class to represent extensions.
"""

from google.protobuf.internal import type_checkers
from google.protobuf.descriptor import FieldDescriptor


def _VerifyExtensionHandle(message, extension_handle):
  """Verify that the given extension handle is valid."""

  if not isinstance(extension_handle, FieldDescriptor):
    raise KeyError('HasExtension() expects an extension handle, got: %s' %
                   extension_handle)

  if not extension_handle.is_extension:
    raise KeyError('"%s" is not an extension.' % extension_handle.full_name)

  if not extension_handle.containing_type:
    raise KeyError('"%s" is missing a containing_type.'
                   % extension_handle.full_name)

  if extension_handle.containing_type is not message.DESCRIPTOR:
    raise KeyError('Extension "%s" extends message type "%s", but this '
                   'message is of type "%s".' %
                   (extension_handle.full_name,
                    extension_handle.containing_type.full_name,
                    message.DESCRIPTOR.full_name))


# TODO: Unify error handling of "unknown extension" crap.
# TODO: Support iteritems()-style iteration over all
# extensions with the "has" bits turned on?
class _ExtensionDict(object):

  """Dict-like container for Extension fields on proto instances.

  Note that in all cases we expect extension handles to be
  FieldDescriptors.
  """

  def __init__(self, extended_message):
    """
    Args:
      extended_message: Message instance for which we are the Extensions dict.
    """
    self._extended_message = extended_message

  def __getitem__(self, extension_handle):
    """Returns the current value of the given extension handle."""

    _VerifyExtensionHandle(self._extended_message, extension_handle)

    result = self._extended_message._fields.get(extension_handle)
    if result is not None:
      return result

    if extension_handle.is_repeated:
      result = extension_handle._default_constructor(self._extended_message)
    elif extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
      message_type = extension_handle.message_type
      if not hasattr(message_type, '_concrete_class'):
        # pylint: disable=g-import-not-at-top
        from google.protobuf import message_factory
        message_factory.GetMessageClass(message_type)
      if not hasattr(extension_handle.message_type, '_concrete_class'):
        from google.protobuf import message_factory
        message_factory.GetMessageClass(extension_handle.message_type)
      result = extension_handle.message_type._concrete_class()
      try:
        result._SetListener(self._extended_message._listener_for_children)
      except ReferenceError:
        pass
    else:
      # Singular scalar -- just return the default without inserting into the
      # dict.
      return extension_handle.default_value

    # Atomically check if another thread has preempted us and, if not, swap
    # in the new object we just created. If someone has preempted us, we
    # take that object and discard ours.
    # WARNING: We are relying on setdefault() being atomic. This is true
    # in CPython but we haven't investigated others. This warning appears
    # in several other locations in this file.
    result = self._extended_message._fields.setdefault(
        extension_handle, result)

    return result

  def __eq__(self, other):
    if not isinstance(other, self.__class__):
      return False

    my_fields = self._extended_message.ListFields()
    other_fields = other._extended_message.ListFields()

    # Get rid of non-extension fields.
    my_fields = [field for field in my_fields if field.is_extension]
    other_fields = [field for field in other_fields if field.is_extension]

    return my_fields == other_fields

  def __ne__(self, other):
    return not self == other

  def __len__(self):
    fields = self._extended_message.ListFields()
    # Get rid of non-extension fields.
    extension_fields = [field for field in fields if field[0].is_extension]
    return len(extension_fields)

  def __hash__(self):
    raise TypeError('unhashable object')

  # Note that this is only meaningful for non-repeated, scalar extension
  # fields. Note also that we may have to call _Modified() when we do
  # successfully set a field this way, to set any necessary "has" bits in the
  # ancestors of the extended message.
  def __setitem__(self, extension_handle, value):
    """If extension_handle specifies a non-repeated, scalar extension
    field, sets the value of that field.
    """

    _VerifyExtensionHandle(self._extended_message, extension_handle)

    if (extension_handle.is_repeated or
        extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE):
      raise TypeError(
          'Cannot assign to extension "%s" because it is a repeated or '
          'composite type.' % extension_handle.full_name)

    # It's slightly wasteful to lookup the type checker each time,
    # but we expect this to be a vanishingly uncommon case anyway.
    type_checker = type_checkers.GetTypeChecker(extension_handle)
    # pylint: disable=protected-access
    self._extended_message._fields[extension_handle] = (
        type_checker.CheckValue(value))
    self._extended_message._Modified()

  def __delitem__(self, extension_handle):
    self._extended_message.ClearExtension(extension_handle)

  def _FindExtensionByName(self, name):
    """Tries to find a known extension with the specified name.

    Args:
      name: Extension full name.

    Returns:
      Extension field descriptor.
    """
    descriptor = self._extended_message.DESCRIPTOR
    extensions = descriptor.file.pool._extensions_by_name[descriptor]
    return extensions.get(name, None)

  def _FindExtensionByNumber(self, number):
    """Tries to find a known extension with the field number.

    Args:
      number: Extension field number.

    Returns:
      Extension field descriptor.
    """
    descriptor = self._extended_message.DESCRIPTOR
    extensions = descriptor.file.pool._extensions_by_number[descriptor]
    return extensions.get(number, None)

  def __iter__(self):
    # Return a generator over the populated extension fields
    return (f[0] for f in self._extended_message.ListFields()
            if f[0].is_extension)

  def __contains__(self, extension_handle):
    _VerifyExtensionHandle(self._extended_message, extension_handle)

    if extension_handle not in self._extended_message._fields:
      return False

    if extension_handle.is_repeated:
      return bool(self._extended_message._fields.get(extension_handle))

    if extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
      value = self._extended_message._fields.get(extension_handle)
      # pylint: disable=protected-access
      return value is not None and value._is_present_in_parent

    return True
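The WARNING comment in `__getitem__` above leans on `dict.setdefault()` being atomic in CPython: the first thread to insert wins the race, and every other thread discards its freshly built default and uses the winner's object. A small self-contained sketch of that pattern (the names `fields`, `materialize`, and `winners` are illustrative, not from the file):

```python
import threading

# Shared field map, standing in for message._fields.
fields = {}
winners = []

def materialize(key):
    candidate = object()  # freshly built default, as in __getitem__
    # setdefault() is atomic in CPython: either our candidate is stored,
    # or we get back whatever another thread stored first.
    stored = fields.setdefault(key, candidate)
    winners.append(stored)

threads = [threading.Thread(target=materialize, args=('ext',)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread ended up holding the single object that won the race.
assert all(w is winners[0] for w in winners)
assert fields['ext'] is winners[0]
```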
+312
@@ -0,0 +1,312 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains FieldMask class."""

from google.protobuf.descriptor import FieldDescriptor


class FieldMask(object):
  """Class for FieldMask message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts FieldMask to string according to proto3 JSON spec."""
    camelcase_paths = []
    for path in self.paths:
      camelcase_paths.append(_SnakeCaseToCamelCase(path))
    return ','.join(camelcase_paths)

  def FromJsonString(self, value):
    """Converts string to FieldMask according to proto3 JSON spec."""
    if not isinstance(value, str):
      raise ValueError('FieldMask JSON value not a string: {!r}'.format(value))
    self.Clear()
    if value:
      for path in value.split(','):
        self.paths.append(_CamelCaseToSnakeCase(path))

  def IsValidForDescriptor(self, message_descriptor):
    """Checks whether the FieldMask is valid for Message Descriptor."""
    for path in self.paths:
      if not _IsValidPath(message_descriptor, path):
        return False
    return True

  def AllFieldsFromDescriptor(self, message_descriptor):
    """Gets all direct fields of Message Descriptor to FieldMask."""
    self.Clear()
    for field in message_descriptor.fields:
      self.paths.append(field.name)

  def CanonicalFormFromMask(self, mask):
    """Converts a FieldMask to the canonical form.

    Removes paths that are covered by another path. For example,
    "foo.bar" is covered by "foo" and will be removed if "foo"
    is also in the FieldMask. Then sorts all paths in alphabetical order.

    Args:
      mask: The original FieldMask to be converted.
    """
    tree = _FieldMaskTree(mask)
    tree.ToFieldMask(self)

  def Union(self, mask1, mask2):
    """Merges mask1 and mask2 into this FieldMask."""
    _CheckFieldMaskMessage(mask1)
    _CheckFieldMaskMessage(mask2)
    tree = _FieldMaskTree(mask1)
    tree.MergeFromFieldMask(mask2)
    tree.ToFieldMask(self)

  def Intersect(self, mask1, mask2):
    """Intersects mask1 and mask2 into this FieldMask."""
    _CheckFieldMaskMessage(mask1)
    _CheckFieldMaskMessage(mask2)
    tree = _FieldMaskTree(mask1)
    intersection = _FieldMaskTree()
    for path in mask2.paths:
      tree.IntersectPath(path, intersection)
    intersection.ToFieldMask(self)

  def MergeMessage(
      self, source, destination,
      replace_message_field=False, replace_repeated_field=False):
    """Merges fields specified in FieldMask from source to destination.

    Args:
      source: Source message.
      destination: The destination message to be merged into.
      replace_message_field: Replace message field if True. Merge message
          field if False.
      replace_repeated_field: Replace repeated field if True. Append
          elements of repeated field if False.
    """
    tree = _FieldMaskTree(self)
    tree.MergeMessage(
        source, destination, replace_message_field, replace_repeated_field)


def _IsValidPath(message_descriptor, path):
  """Checks whether the path is valid for Message Descriptor."""
  parts = path.split('.')
  last = parts.pop()
  for name in parts:
    field = message_descriptor.fields_by_name.get(name)
    if (field is None or
        field.is_repeated or
        field.type != FieldDescriptor.TYPE_MESSAGE):
      return False
    message_descriptor = field.message_type
  return last in message_descriptor.fields_by_name


def _CheckFieldMaskMessage(message):
  """Raises ValueError if message is not a FieldMask."""
  message_descriptor = message.DESCRIPTOR
  if (message_descriptor.name != 'FieldMask' or
      message_descriptor.file.name != 'google/protobuf/field_mask.proto'):
    raise ValueError('Message {0} is not a FieldMask.'.format(
        message_descriptor.full_name))


def _SnakeCaseToCamelCase(path_name):
  """Converts a path name from snake_case to camelCase."""
  result = []
  after_underscore = False
  for c in path_name:
    if c.isupper():
      raise ValueError(
          'Fail to print FieldMask to Json string: Path name '
          '{0} must not contain uppercase letters.'.format(path_name))
    if after_underscore:
      if c.islower():
        result.append(c.upper())
        after_underscore = False
      else:
        raise ValueError(
            'Fail to print FieldMask to Json string: The '
            'character after a "_" must be a lowercase letter '
            'in path name {0}.'.format(path_name))
    elif c == '_':
      after_underscore = True
    else:
      result += c

  if after_underscore:
    raise ValueError('Fail to print FieldMask to Json string: Trailing "_" '
                     'in path name {0}.'.format(path_name))
  return ''.join(result)


def _CamelCaseToSnakeCase(path_name):
  """Converts a field name from camelCase to snake_case."""
  result = []
  for c in path_name:
    if c == '_':
      raise ValueError('Fail to parse FieldMask: Path name '
                       '{0} must not contain "_"s.'.format(path_name))
    if c.isupper():
      result += '_'
      result += c.lower()
    else:
      result += c
  return ''.join(result)


class _FieldMaskTree(object):
  """Represents a FieldMask in a tree structure.

  For example, given a FieldMask "foo.bar,foo.baz,bar.baz",
  the FieldMaskTree will be:
      [_root] -+- foo -+- bar
               |       |
               |       +- baz
               |
               +- bar --- baz
  In the tree, each leaf node represents a field path.
  """

  __slots__ = ('_root',)

  def __init__(self, field_mask=None):
    """Initializes the tree by FieldMask."""
    self._root = {}
    if field_mask:
      self.MergeFromFieldMask(field_mask)

  def MergeFromFieldMask(self, field_mask):
    """Merges a FieldMask to the tree."""
    for path in field_mask.paths:
      self.AddPath(path)

  def AddPath(self, path):
    """Adds a field path into the tree.

    If the field path to add is a sub-path of an existing field path
    in the tree (i.e., a leaf node), it means the tree already matches
    the given path so nothing will be added to the tree. If the path
    matches an existing non-leaf node in the tree, that non-leaf node
    will be turned into a leaf node with all its children removed because
    the path matches all the node's children. Otherwise, a new path will
    be added.

    Args:
      path: The field path to add.
    """
    node = self._root
    for name in path.split('.'):
      if name not in node:
        node[name] = {}
      elif not node[name]:
        # Pre-existing empty node implies we already have this entire tree.
        return
      node = node[name]
    # Remove any sub-trees we might have had.
    node.clear()

  def ToFieldMask(self, field_mask):
    """Converts the tree to a FieldMask."""
    field_mask.Clear()
    _AddFieldPaths(self._root, '', field_mask)

  def IntersectPath(self, path, intersection):
    """Calculates the intersection part of a field path with this tree.

    Args:
      path: The field path to calculate.
      intersection: The out tree to record the intersection part.
    """
    node = self._root
    for name in path.split('.'):
      if name not in node:
        return
      elif not node[name]:
        intersection.AddPath(path)
        return
      node = node[name]
    intersection.AddLeafNodes(path, node)

  def AddLeafNodes(self, prefix, node):
    """Adds leaf nodes that begin with prefix to this tree."""
    if not node:
      self.AddPath(prefix)
    for name in node:
      child_path = prefix + '.' + name
      self.AddLeafNodes(child_path, node[name])

  def MergeMessage(
      self, source, destination,
      replace_message, replace_repeated):
    """Merge all fields specified by this tree from source to destination."""
    _MergeMessage(
        self._root, source, destination, replace_message, replace_repeated)


def _StrConvert(value):
  """Converts value to str if it is not."""
  # This file is imported by the C extension, and some methods like ClearField
  # require a string for the field name. py2/py3 have different text
  # types and may use unicode.
  if not isinstance(value, str):
    return value.encode('utf-8')
  return value


def _MergeMessage(
    node, source, destination, replace_message, replace_repeated):
  """Merge all fields specified by a sub-tree from source to destination."""
  source_descriptor = source.DESCRIPTOR
  for name in node:
    child = node[name]
    field = source_descriptor.fields_by_name[name]
    if field is None:
      raise ValueError('Error: Can\'t find field {0} in message {1}.'.format(
          name, source_descriptor.full_name))
    if child:
      # Sub-paths are only allowed for singular message fields.
      if (field.is_repeated or
          field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE):
        raise ValueError('Error: Field {0} in message {1} is not a singular '
                         'message field and cannot have sub-fields.'.format(
                             name, source_descriptor.full_name))
      if source.HasField(name):
        _MergeMessage(
            child, getattr(source, name), getattr(destination, name),
            replace_message, replace_repeated)
      continue
    if field.is_repeated:
      if replace_repeated:
        destination.ClearField(_StrConvert(name))
      repeated_source = getattr(source, name)
      repeated_destination = getattr(destination, name)
      repeated_destination.MergeFrom(repeated_source)
    else:
      if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
        if replace_message:
          destination.ClearField(_StrConvert(name))
        if source.HasField(name):
          getattr(destination, name).MergeFrom(getattr(source, name))
      elif not field.has_presence or source.HasField(name):
        setattr(destination, name, getattr(source, name))
      else:
        destination.ClearField(_StrConvert(name))


def _AddFieldPaths(node, prefix, field_mask):
  """Adds the field paths descended from node to field_mask."""
  if not node and prefix:
    field_mask.paths.append(prefix)
    return
  for name in sorted(node):
    if prefix:
      child_path = prefix + '.' + name
    else:
      child_path = name
    _AddFieldPaths(node[name], child_path, field_mask)
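The covering behavior that `_FieldMaskTree.AddPath` documents (a leaf node covers all sub-paths, and adding a covering path collapses its children) can be restated as a standalone sketch over plain dicts. `add_path` below is an illustrative re-statement of that logic, not an import of the class above:

```python
def add_path(root, path):
    """Dict-based restatement of _FieldMaskTree.AddPath's semantics."""
    node = root
    for name in path.split('.'):
        if name not in node:
            node[name] = {}
        elif not node[name]:
            # An ancestor is already a leaf: the path is fully covered.
            return
        node = node[name]
    # Collapse children: this node now covers everything below it.
    node.clear()


tree = {}
add_path(tree, 'foo.bar')
add_path(tree, 'foo.baz')
assert tree == {'foo': {'bar': {}, 'baz': {}}}

add_path(tree, 'foo')        # "foo" covers both sub-paths, so they collapse
assert tree == {'foo': {}}

add_path(tree, 'foo.qux')    # already covered by leaf "foo": a no-op
assert tree == {'foo': {}}
```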
+55
@@ -0,0 +1,55 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Defines a listener interface for observing certain
state transitions on Message objects.

Also defines a null implementation of this interface.
"""

__author__ = 'robinson@google.com (Will Robinson)'


class MessageListener(object):

  """Listens for modifications made to a message. Meant to be registered via
  Message._SetListener().

  Attributes:
    dirty: If True, then calling Modified() would be a no-op. This can be
        used to avoid these calls entirely in the common case.
  """

  def Modified(self):
    """Called every time the message is modified in such a way that the parent
    message may need to be updated. This currently means either:
    (a) The message was modified for the first time, so the parent message
        should henceforth mark the message as present.
    (b) The message's cached byte size became dirty -- i.e. the message was
        modified for the first time after a previous call to ByteSize().
        Therefore the parent should also mark its byte size as dirty.
    Note that (a) implies (b), since new objects start out with a cached size
    of zero. However, we document (a) explicitly because it is important.

    Modified() will *only* be called in response to one of these two events --
    not every time the sub-message is modified.

    Note that if the listener's |dirty| attribute is true, then calling
    Modified at the moment would be a no-op, so it can be skipped. Performance-
    sensitive callers should check this attribute directly before calling since
    it will be true most of the time.
    """

    raise NotImplementedError


class NullMessageListener(object):

  """No-op MessageListener implementation."""

  def Modified(self):
    pass
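The contract in `Modified()`'s docstring (notify only on state transitions, and let performance-sensitive callers short-circuit on `dirty`) can be illustrated with a toy listener. `CountingListener` and `Child` are hypothetical names for this sketch, not classes from the file:

```python
class CountingListener:
    """Minimal MessageListener-style object that records Modified() calls."""

    def __init__(self):
        self.dirty = False
        self.calls = 0

    def Modified(self):
        self.calls += 1
        self.dirty = True


class Child:
    """Toy sub-message that notifies its parent's listener on modification."""

    def __init__(self, listener):
        self._listener = listener

    def set_field(self):
        # Performance-sensitive callers check |dirty| first, as documented:
        # once the listener is dirty, another Modified() would be a no-op.
        if not self._listener.dirty:
            self._listener.Modified()


listener = CountingListener()
child = Child(listener)
child.set_field()
child.set_field()  # listener already dirty: no second notification

assert listener.calls == 1
assert listener.dirty is True
```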
Executable
+5
@@ -0,0 +1,5 @@
"""
This file contains the serialized FeatureSetDefaults object corresponding to
the Pure Python runtime. This is used for feature resolution under Editions.
"""
_PROTOBUF_INTERNAL_PYTHON_EDITION_DEFAULTS = b"\n\027\030\204\007\"\000*\020\010\001\020\002\030\002 \003(\0010\0028\002@\001\n\027\030\347\007\"\000*\020\010\002\020\001\030\001 \002(\0010\0018\002@\001\n\027\030\350\007\"\014\010\001\020\001\030\001 \002(\0010\001*\0048\002@\001\n\027\030\351\007\"\020\010\001\020\001\030\001 \002(\0010\0018\001@\002*\000 \346\007(\351\007"
+1599
File diff suppressed because it is too large
+128
@@ -0,0 +1,128 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""A subclass of unittest.TestCase which checks for reference leaks.

To use:
- Use testing_refleak.BaseTestCase instead of unittest.TestCase
- Configure and compile Python with --with-pydebug

If sys.gettotalrefcount() is not available (because Python was built without
the Py_DEBUG option), then this module is a no-op and tests will run normally.
"""

import copyreg
import gc
import sys
import unittest


class LocalTestResult(unittest.TestResult):
  """A TestResult which forwards events to a parent object, except for Skips."""

  def __init__(self, parent_result):
    unittest.TestResult.__init__(self)
    self.parent_result = parent_result

  def addError(self, test, error):
    self.parent_result.addError(test, error)

  def addFailure(self, test, error):
    self.parent_result.addFailure(test, error)

  def addSkip(self, test, reason):
    pass

  def addDuration(self, test, duration):
    pass


class ReferenceLeakCheckerMixin(object):
  """A mixin class for TestCase, which checks reference counts."""

  NB_RUNS = 3

  def run(self, result=None):
    testMethod = getattr(self, self._testMethodName)
    expecting_failure_method = getattr(testMethod, "__unittest_expecting_failure__", False)
    expecting_failure_class = getattr(self, "__unittest_expecting_failure__", False)
    if expecting_failure_class or expecting_failure_method:
      return

    # python_message.py registers all Message classes to some pickle global
    # registry, which makes the classes immortal.
    # We save a copy of this registry, and reset it before we count references.
    self._saved_pickle_registry = copyreg.dispatch_table.copy()

    # Run the test twice, to warm up the instance attributes.
    super(ReferenceLeakCheckerMixin, self).run(result=result)
    super(ReferenceLeakCheckerMixin, self).run(result=result)

    local_result = LocalTestResult(result)
    num_flakes = 0
    refcount_deltas = []

    # Observe the refcount, then create oldrefcount which actually makes the
    # refcount 1 higher than the recorded value immediately
    oldrefcount = self._getRefcounts()
    while len(refcount_deltas) < self.NB_RUNS:
      oldrefcount = self._getRefcounts()
      super(ReferenceLeakCheckerMixin, self).run(result=local_result)
      newrefcount = self._getRefcounts()
      # If the GC was able to collect some objects after the call to run() that
      # it could not collect before the call, then the counts won't match.
      if newrefcount < oldrefcount and num_flakes < 2:
        # This result is (probably) a flake -- garbage collectors aren't very
        # predictable, but a lower ending refcount is the opposite of the
        # failure we are testing for. If the result is repeatable, then we will
        # eventually report it, but not before trying to eliminate it.
        num_flakes += 1
        continue
      num_flakes = 0
      refcount_deltas.append(newrefcount - oldrefcount)
    print(refcount_deltas, self)

    try:
      self.assertEqual(refcount_deltas, [0] * self.NB_RUNS)
    except Exception:  # pylint: disable=broad-except
      result.addError(self, sys.exc_info())

  def _getRefcounts(self):
    if hasattr(sys, "_clear_internal_caches"):  # Since 3.13
      sys._clear_internal_caches()  # pylint: disable=protected-access
    else:
      sys._clear_type_cache()  # pylint: disable=protected-access
    copyreg.dispatch_table.clear()
    copyreg.dispatch_table.update(self._saved_pickle_registry)
    # It is sometimes necessary to gc.collect() multiple times, to ensure
    # that all objects can be collected.
    gc.collect()
    gc.collect()
    gc.collect()
    return sys.gettotalrefcount()


if hasattr(sys, 'gettotalrefcount'):

  def TestCase(test_class):
    new_bases = (ReferenceLeakCheckerMixin,) + test_class.__bases__
    new_class = type(test_class)(
        test_class.__name__, new_bases, dict(test_class.__dict__))
    return new_class
  SkipReferenceLeakChecker = unittest.skip

else:
  # When PyDEBUG is not enabled, run the tests normally.

  def TestCase(test_class):
    return test_class

  def SkipReferenceLeakChecker(reason):
    del reason  # Don't skip, so don't need a reason.
    def Same(func):
      return func
    return Same
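The `TestCase()` decorator above injects the leak-checking mixin by rebuilding the class with an extended bases tuple. A self-contained sketch of that class-rebuild pattern, with a stand-in mixin (the `marker` attribute is purely illustrative):

```python
import unittest


class ReferenceLeakCheckerMixin:
    """Stand-in mixin; the real one overrides run() to count references."""
    marker = 'leak-checked'


def TestCase(test_class):
    # Same rebuild the module performs: prepend the mixin to the bases and
    # recreate the class with its original name and attribute dict.
    new_bases = (ReferenceLeakCheckerMixin,) + test_class.__bases__
    return type(test_class)(
        test_class.__name__, new_bases, dict(test_class.__dict__))


class MyTest(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)


MyTest = TestCase(MyTest)

# The rebuilt class keeps its name and gains the mixin in its MRO.
assert MyTest.__name__ == 'MyTest'
assert issubclass(MyTest, ReferenceLeakCheckerMixin)
assert issubclass(MyTest, unittest.TestCase)
assert MyTest.marker == 'leak-checked'
```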
+455
@@ -0,0 +1,455 @@
|
||||
# Protocol Buffers - Google's data interchange format
|
||||
# Copyright 2008 Google Inc. All rights reserved.
|
||||
#
|
||||
# Use of this source code is governed by a BSD-style
|
||||
# license that can be found in the LICENSE file or at
|
||||
# https://developers.google.com/open-source/licenses/bsd
|
||||
|
||||
"""Provides type checking routines.
|
||||
|
||||
This module defines type checking utilities in the forms of dictionaries:
|
||||
|
||||
VALUE_CHECKERS: A dictionary of field types and a value validation object.
|
||||
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
|
||||
function.
|
||||
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
|
||||
function.
|
||||
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field typed and their
|
||||
corresponding wire types.
|
||||
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
|
||||
function.
|
||||
"""
|
||||
|
||||
__author__ = 'robinson@google.com (Will Robinson)'

import numbers
import struct
import warnings

from google.protobuf import descriptor
from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format

_FieldDescriptor = descriptor.FieldDescriptor

# TODO: Remove this warning count after 34.0
# Assign-bool-to-int/enum warnings will print 100 times at most, which should
# be enough for users to notice without causing a timeout.
_BoolWarningCount = 100


def TruncateToFourByteFloat(original):
  return struct.unpack('<f', struct.pack('<f', original))[0]


def ToShortestFloat(original):
  """Returns the shortest float that has the same value on the wire."""
  # All 4-byte floats have between 6 and 9 significant digits, so we
  # start with 6 as the lower bound.
  # This has to be iterative because using '.9g' directly cannot get rid
  # of the noise for most values. For example, with float_field=0.9,
  # '.9g' would print 0.899999976.
  precision = 6
  rounded = float('{0:.{1}g}'.format(original, precision))
  while TruncateToFourByteFloat(rounded) != original:
    precision += 1
    rounded = float('{0:.{1}g}'.format(original, precision))
  return rounded


def GetTypeChecker(field):
  """Returns a type checker for a message field of the specified types.

  Args:
    field: FieldDescriptor object for this field.

  Returns:
    An instance of TypeChecker which can be used to verify the types
    of values assigned to a field of the specified type.
  """
  if (field.cpp_type == _FieldDescriptor.CPPTYPE_STRING and
      field.type == _FieldDescriptor.TYPE_STRING):
    return UnicodeValueChecker()
  if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM:
    if field.enum_type.is_closed:
      return EnumValueChecker(field.enum_type)
    else:
      # When open enums are supported, any int32 can be assigned.
      return _VALUE_CHECKERS[_FieldDescriptor.CPPTYPE_INT32]
  return _VALUE_CHECKERS[field.cpp_type]

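As a standalone illustration of the shortest-float logic above (not the protobuf API itself; the function names here are renamed for the sketch), the idea is that a value stored as a 4-byte float prints with noise as a Python double, and widening the precision until the 4-byte round-trip matches recovers the short form:

```python
import struct

def truncate_to_four_byte_float(original):
    # Round-trip through a 4-byte IEEE-754 float, as in the module above.
    return struct.unpack('<f', struct.pack('<f', original))[0]

def to_shortest_float(original):
    # Start at 6 significant digits and widen until the rounded
    # representation maps back to the same 4-byte float.
    precision = 6
    rounded = float('{0:.{1}g}'.format(original, precision))
    while truncate_to_four_byte_float(rounded) != original:
        precision += 1
        rounded = float('{0:.{1}g}'.format(original, precision))
    return rounded

f = truncate_to_four_byte_float(0.9)  # noisy double, e.g. 0.8999999761581421
print(to_shortest_float(f))           # 0.9
```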
# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):

  """Type checker used to catch type errors as early as possible
  when the client is setting scalar fields in protocol messages.
  """

  def __init__(self, *acceptable_types):
    self._acceptable_types = acceptable_types

  def CheckValue(self, proposed_value):
    """Type check the provided value and return it.

    The returned value might have been normalized to another type.
    """
    if not isinstance(proposed_value, self._acceptable_types):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), self._acceptable_types))
      raise TypeError(message)
    return proposed_value


class TypeCheckerWithDefault(TypeChecker):

  def __init__(self, default_value, *acceptable_types):
    TypeChecker.__init__(self, *acceptable_types)
    self._default_value = default_value

  def DefaultValue(self):
    return self._default_value

class BoolValueChecker(object):
  """Type checker used for bool fields."""

  def CheckValue(self, proposed_value):
    if not hasattr(proposed_value, '__index__'):
      # Under NumPy 2.3, numpy.bool does not have an __index__ method.
      if (type(proposed_value).__module__ == 'numpy' and
          type(proposed_value).__name__ == 'bool'):
        return bool(proposed_value)
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bool, int)))
      raise TypeError(message)

    if (type(proposed_value).__module__ == 'numpy' and
        type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bool, int)))
      raise TypeError(message)

    return bool(proposed_value)

  def DefaultValue(self):
    return False


# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):

  """Checker used for integer fields. Performs type-check and range check."""

  def CheckValue(self, proposed_value):
    global _BoolWarningCount
    if type(proposed_value) == bool and _BoolWarningCount > 0:
      _BoolWarningCount -= 1
      message = (
          '%.1024r has type %s, but expected one of: %s. This warning '
          'will turn into an error in 7.34.0; please fix it before then.'
          % (
              proposed_value,
              type(proposed_value),
              (int,),
          )
      )
      # TODO: Raise errors in 2026 Q1 release
      warnings.warn(message)

    if not hasattr(proposed_value, '__index__') or (
        type(proposed_value).__module__ == 'numpy' and
        type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (int,)))
      raise TypeError(message)

    if not self._MIN <= int(proposed_value) <= self._MAX:
      raise ValueError('Value out of range: %d' % proposed_value)
    # We force all values to int to make alternate implementations where the
    # distinction is more significant (e.g. the C++ implementation) simpler.
    proposed_value = int(proposed_value)
    return proposed_value

  def DefaultValue(self):
    return 0

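The type-and-range check above can be reduced to a standalone sketch (the class name here is hypothetical, and the bool deprecation-warning path is omitted): anything with `__index__` is normalized to `int` and then bounds-checked against the field's range.

```python
class Int32Checker:
    # Same bounds as Int32ValueChecker below: signed 32-bit range.
    _MIN, _MAX = -2**31, 2**31 - 1

    def check(self, value):
        if not hasattr(value, '__index__'):
            raise TypeError('%r has type %s, but expected int' %
                            (value, type(value)))
        value = int(value)  # normalize (e.g. bool -> int) for simplicity
        if not self._MIN <= value <= self._MAX:
            raise ValueError('Value out of range: %d' % value)
        return value

c = Int32Checker()
print(c.check(True))   # bools pass the __index__ check and normalize to 1
try:
    c.check(2**31)
except ValueError as e:
    print(e)           # Value out of range: 2147483648
```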
class EnumValueChecker(object):

  """Checker used for enum fields. Performs type-check and range check."""

  def __init__(self, enum_type):
    self._enum_type = enum_type

  def CheckValue(self, proposed_value):
    global _BoolWarningCount
    if type(proposed_value) == bool and _BoolWarningCount > 0:
      _BoolWarningCount -= 1
      message = (
          '%.1024r has type %s, but expected one of: %s. This warning '
          'will turn into an error in 7.34.0; please fix it before then.'
          % (
              proposed_value,
              type(proposed_value),
              (int,),
          )
      )
      # TODO: Raise errors in 2026 Q1 release
      warnings.warn(message)
    if not isinstance(proposed_value, numbers.Integral):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (int,)))
      raise TypeError(message)
    if int(proposed_value) not in self._enum_type.values_by_number:
      raise ValueError('Unknown enum value: %d' % proposed_value)
    return proposed_value

  def DefaultValue(self):
    return self._enum_type.values[0].number

class UnicodeValueChecker(object):

  """Checker used for string fields.

  Always returns a unicode value, even if the input is of type str.
  """

  def CheckValue(self, proposed_value):
    if not isinstance(proposed_value, (bytes, str)):
      message = ('%.1024r has type %s, but expected one of: %s' %
                 (proposed_value, type(proposed_value), (bytes, str)))
      raise TypeError(message)

    # If the value is of type 'bytes' make sure that it is valid UTF-8 data.
    if isinstance(proposed_value, bytes):
      try:
        proposed_value = proposed_value.decode('utf-8')
      except UnicodeDecodeError:
        raise ValueError('%.1024r has type bytes, but isn\'t valid UTF-8 '
                         'encoding. Non-UTF-8 strings must be converted to '
                         'unicode objects before being added.' %
                         (proposed_value))
    else:
      try:
        proposed_value.encode('utf8')
      except UnicodeEncodeError:
        raise ValueError('%.1024r isn\'t a valid unicode string and '
                         'can\'t be encoded in UTF-8.' %
                         (proposed_value))

    return proposed_value

  def DefaultValue(self):
    return u""

class Int32ValueChecker(IntValueChecker):
  # We're sure to use ints instead of longs here since comparison may be more
  # efficient.
  _MIN = -2147483648
  _MAX = 2147483647


class Uint32ValueChecker(IntValueChecker):
  _MIN = 0
  _MAX = (1 << 32) - 1


class Int64ValueChecker(IntValueChecker):
  _MIN = -(1 << 63)
  _MAX = (1 << 63) - 1


class Uint64ValueChecker(IntValueChecker):
  _MIN = 0
  _MAX = (1 << 64) - 1


# The max 4-byte float is about 3.4028234663852886e+38
_FLOAT_MAX = float.fromhex('0x1.fffffep+127')
_FLOAT_MIN = -_FLOAT_MAX
_MAX_FLOAT_AS_DOUBLE_ROUNDED = 3.4028235677973366e38
_INF = float('inf')
_NEG_INF = float('-inf')

class DoubleValueChecker(object):
  """Checker used for double fields.

  Performs type-check and range check.
  """

  def CheckValue(self, proposed_value):
    """Check and convert proposed_value to float."""
    if (not hasattr(proposed_value, '__float__') and
        not hasattr(proposed_value, '__index__')) or (
            type(proposed_value).__module__ == 'numpy' and
            type(proposed_value).__name__ == 'ndarray'):
      message = ('%.1024r has type %s, but expected one of: int, float' %
                 (proposed_value, type(proposed_value)))
      raise TypeError(message)
    return float(proposed_value)

  def DefaultValue(self):
    return 0.0

class FloatValueChecker(DoubleValueChecker):
  """Checker used for float fields.

  Performs type-check and range check.

  Values exceeding a 32-bit float will be converted to inf/-inf.
  """

  def CheckValue(self, proposed_value):
    """Check and convert proposed_value to float."""
    converted_value = super().CheckValue(proposed_value)
    # This inf rounding matches the C++ proto SafeDoubleToFloat logic.
    if converted_value > _FLOAT_MAX:
      if converted_value <= _MAX_FLOAT_AS_DOUBLE_ROUNDED:
        return _FLOAT_MAX
      return _INF
    if converted_value < _FLOAT_MIN:
      if converted_value >= -_MAX_FLOAT_AS_DOUBLE_ROUNDED:
        return _FLOAT_MIN
      return _NEG_INF

    return TruncateToFourByteFloat(converted_value)

# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
    _FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
    _FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
    _FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
    _FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
    _FieldDescriptor.CPPTYPE_DOUBLE: DoubleValueChecker(),
    _FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(),
    _FieldDescriptor.CPPTYPE_BOOL: BoolValueChecker(),
    _FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes),
}

# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
    _FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
    _FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
    _FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
    _FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
    _FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
    _FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
    _FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
    _FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
    _FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
    _FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
    _FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
    _FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
    _FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
    _FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
    _FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
    _FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
    _FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
    _FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize,
}

# Maps from field types to encoder constructors.
TYPE_TO_ENCODER = {
    _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder,
    _FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder,
    _FieldDescriptor.TYPE_INT64: encoder.Int64Encoder,
    _FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder,
    _FieldDescriptor.TYPE_INT32: encoder.Int32Encoder,
    _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder,
    _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder,
    _FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder,
    _FieldDescriptor.TYPE_STRING: encoder.StringEncoder,
    _FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder,
    _FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder,
    _FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder,
    _FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder,
    _FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder,
    _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder,
    _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder,
    _FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder,
    _FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder,
}

# Maps from field types to sizer constructors.
TYPE_TO_SIZER = {
    _FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer,
    _FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer,
    _FieldDescriptor.TYPE_INT64: encoder.Int64Sizer,
    _FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer,
    _FieldDescriptor.TYPE_INT32: encoder.Int32Sizer,
    _FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer,
    _FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer,
    _FieldDescriptor.TYPE_BOOL: encoder.BoolSizer,
    _FieldDescriptor.TYPE_STRING: encoder.StringSizer,
    _FieldDescriptor.TYPE_GROUP: encoder.GroupSizer,
    _FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer,
    _FieldDescriptor.TYPE_BYTES: encoder.BytesSizer,
    _FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer,
    _FieldDescriptor.TYPE_ENUM: encoder.EnumSizer,
    _FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer,
    _FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer,
    _FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer,
    _FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer,
}

# Maps from field type to a decoder constructor.
TYPE_TO_DECODER = {
    _FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder,
    _FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder,
    _FieldDescriptor.TYPE_INT64: decoder.Int64Decoder,
    _FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder,
    _FieldDescriptor.TYPE_INT32: decoder.Int32Decoder,
    _FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder,
    _FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder,
    _FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder,
    _FieldDescriptor.TYPE_STRING: decoder.StringDecoder,
    _FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder,
    _FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder,
    _FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder,
    _FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder,
    _FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder,
    _FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder,
    _FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder,
    _FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder,
    _FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder,
}

# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
    _FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_STRING: wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
    _FieldDescriptor.TYPE_MESSAGE: wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_BYTES: wire_format.WIRETYPE_LENGTH_DELIMITED,
    _FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
    _FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
    _FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
    _FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}
+695
@@ -0,0 +1,695 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains well known classes.

This file defines well known classes which need extra maintenance including:
  - Any
  - Duration
  - FieldMask
  - Struct
  - Timestamp
"""

__author__ = 'jieluo@google.com (Jie Luo)'

import calendar
import collections.abc
import datetime
from typing import Union
import warnings

from google.protobuf.internal import field_mask

FieldMask = field_mask.FieldMask

_TIMESTAMPFORMAT = '%Y-%m-%dT%H:%M:%S'
_NANOS_PER_SECOND = 1000000000
_NANOS_PER_MILLISECOND = 1000000
_NANOS_PER_MICROSECOND = 1000
_MILLIS_PER_SECOND = 1000
_MICROS_PER_SECOND = 1000000
_SECONDS_PER_DAY = 24 * 3600
_DURATION_SECONDS_MAX = 315576000000
_TIMESTAMP_SECONDS_MIN = -62135596800
_TIMESTAMP_SECONDS_MAX = 253402300799

_EPOCH_DATETIME_NAIVE = datetime.datetime(1970, 1, 1, tzinfo=None)
_EPOCH_DATETIME_AWARE = _EPOCH_DATETIME_NAIVE.replace(
    tzinfo=datetime.timezone.utc
)

class Any(object):
  """Class for Any Message type."""

  __slots__ = ()

  def Pack(
      self, msg, type_url_prefix='type.googleapis.com/', deterministic=None
  ):
    """Packs the specified message into current Any message."""
    if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
      self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
    else:
      self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
    self.value = msg.SerializeToString(deterministic=deterministic)

  def Unpack(self, msg):
    """Unpacks the current Any message into the specified message."""
    descriptor = msg.DESCRIPTOR
    if not self.Is(descriptor):
      return False
    msg.ParseFromString(self.value)
    return True

  def TypeName(self):
    """Returns the protobuf type name of the inner message."""
    # Only the last part is to be used: b/25630112
    return self.type_url.rpartition('/')[2]

  def Is(self, descriptor):
    """Checks if this Any represents the given protobuf type."""
    return '/' in self.type_url and self.TypeName() == descriptor.full_name

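The URL-join rule in `Any.Pack` above can be shown in isolation (a hypothetical helper name, not part of the protobuf API): a `/` separator is inserted only when the prefix does not already end with one.

```python
def make_type_url(prefix, full_name):
    # Mirrors Any.Pack's type_url construction: add '/' only when the
    # prefix is empty or lacks a trailing slash.
    if len(prefix) < 1 or prefix[-1] != '/':
        return '%s/%s' % (prefix, full_name)
    return '%s%s' % (prefix, full_name)

print(make_type_url('type.googleapis.com', 'google.protobuf.Duration'))
# type.googleapis.com/google.protobuf.Duration
```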
class Timestamp(object):
  """Class for Timestamp message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts Timestamp to RFC 3339 date string format.

    Returns:
      A string converted from timestamp. The string is always Z-normalized
      and uses 3, 6 or 9 fractional digits as required to represent the
      exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
    """
    _CheckTimestampValid(self.seconds, self.nanos)
    nanos = self.nanos
    seconds = self.seconds % _SECONDS_PER_DAY
    days = (self.seconds - seconds) // _SECONDS_PER_DAY
    dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(days, seconds)

    result = dt.isoformat()
    if (nanos % 1e9) == 0:
      # If there are 0 fractional digits, the fractional
      # point '.' should be omitted when serializing.
      return result + 'Z'
    if (nanos % 1e6) == 0:
      # Serialize 3 fractional digits.
      return result + '.%03dZ' % (nanos / 1e6)
    if (nanos % 1e3) == 0:
      # Serialize 6 fractional digits.
      return result + '.%06dZ' % (nanos / 1e3)
    # Serialize 9 fractional digits.
    return result + '.%09dZ' % nanos

  def FromJsonString(self, value):
    """Parse a RFC 3339 date string format to Timestamp.

    Args:
      value: A date string. Any fractional digits (or none) and any offset are
          accepted as long as they fit into nano-seconds precision. Example of
          accepted format: '1972-01-01T10:00:20.021-05:00'

    Raises:
      ValueError: On parsing problems.
    """
    if not isinstance(value, str):
      raise ValueError('Timestamp JSON value not a string: {!r}'.format(value))
    timezone_offset = value.find('Z')
    if timezone_offset == -1:
      timezone_offset = value.find('+')
    if timezone_offset == -1:
      timezone_offset = value.rfind('-')
    if timezone_offset == -1:
      raise ValueError(
          'Failed to parse timestamp: missing valid timezone offset.'
      )
    time_value = value[0:timezone_offset]
    # Parse datetime and nanos.
    point_position = time_value.find('.')
    if point_position == -1:
      second_value = time_value
      nano_value = ''
    else:
      second_value = time_value[:point_position]
      nano_value = time_value[point_position + 1:]
    if 't' in second_value:
      raise ValueError(
          "time data '{0}' does not match format '%Y-%m-%dT%H:%M:%S', "
          "lowercase 't' is not accepted".format(second_value)
      )
    date_object = datetime.datetime.strptime(second_value, _TIMESTAMPFORMAT)
    td = date_object - datetime.datetime(1970, 1, 1)
    seconds = td.seconds + td.days * _SECONDS_PER_DAY
    if len(nano_value) > 9:
      raise ValueError(
          'Failed to parse Timestamp: nanos {0} more than '
          '9 fractional digits.'.format(nano_value)
      )
    if nano_value:
      nanos = round(float('0.' + nano_value) * 1e9)
    else:
      nanos = 0
    # Parse timezone offsets.
    if value[timezone_offset] == 'Z':
      if len(value) != timezone_offset + 1:
        raise ValueError(
            'Failed to parse timestamp: invalid trailing data {0}.'.format(
                value
            )
        )
    else:
      timezone = value[timezone_offset:]
      pos = timezone.find(':')
      if pos == -1:
        raise ValueError('Invalid timezone offset value: {0}.'.format(timezone))
      if timezone[0] == '+':
        seconds -= (int(timezone[1:pos]) * 60 + int(timezone[pos + 1:])) * 60
      else:
        seconds += (int(timezone[1:pos]) * 60 + int(timezone[pos + 1:])) * 60
    # Set seconds and nanos.
    _CheckTimestampValid(seconds, nanos)
    self.seconds = int(seconds)
    self.nanos = int(nanos)

  def GetCurrentTime(self):
    """Get the current UTC into Timestamp."""
    self.FromDatetime(datetime.datetime.now(tz=datetime.timezone.utc))

  def ToNanoseconds(self):
    """Converts Timestamp to nanoseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return self.seconds * _NANOS_PER_SECOND + self.nanos

  def ToMicroseconds(self):
    """Converts Timestamp to microseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return (
        self.seconds * _MICROS_PER_SECOND + self.nanos // _NANOS_PER_MICROSECOND
    )

  def ToMilliseconds(self):
    """Converts Timestamp to milliseconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return (
        self.seconds * _MILLIS_PER_SECOND + self.nanos // _NANOS_PER_MILLISECOND
    )

  def ToSeconds(self):
    """Converts Timestamp to seconds since epoch."""
    _CheckTimestampValid(self.seconds, self.nanos)
    return self.seconds

  def FromNanoseconds(self, nanos):
    """Converts nanoseconds since epoch to Timestamp."""
    seconds = nanos // _NANOS_PER_SECOND
    nanos = nanos % _NANOS_PER_SECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromMicroseconds(self, micros):
    """Converts microseconds since epoch to Timestamp."""
    seconds = micros // _MICROS_PER_SECOND
    nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromMilliseconds(self, millis):
    """Converts milliseconds since epoch to Timestamp."""
    seconds = millis // _MILLIS_PER_SECOND
    nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def FromSeconds(self, seconds):
    """Converts seconds since epoch to Timestamp."""
    _CheckTimestampValid(seconds, 0)
    self.seconds = seconds
    self.nanos = 0

  def ToDatetime(self, tzinfo=None):
    """Converts Timestamp to a datetime.

    Args:
      tzinfo: A datetime.tzinfo subclass; defaults to None.

    Returns:
      If tzinfo is None, returns a timezone-naive UTC datetime (with no
      timezone information, i.e. not aware that it's UTC).

      Otherwise, returns a timezone-aware datetime in the input timezone.
    """
    # Using datetime.fromtimestamp for this would avoid constructing an extra
    # timedelta object and possibly an extra datetime. Unfortunately, that has
    # the disadvantage of not handling the full precision (on all platforms,
    # see https://github.com/python/cpython/issues/109849) or full range (on
    # some platforms, see https://github.com/python/cpython/issues/110042) of
    # datetime.
    _CheckTimestampValid(self.seconds, self.nanos)
    delta = datetime.timedelta(
        seconds=self.seconds,
        microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND),
    )
    if tzinfo is None:
      return _EPOCH_DATETIME_NAIVE + delta
    else:
      # Note the tz conversion has to come after the timedelta arithmetic.
      return (_EPOCH_DATETIME_AWARE + delta).astimezone(tzinfo)

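The epoch-plus-timedelta approach in `ToDatetime` can be sketched standalone (hypothetical helper name; for simplicity this sketch uses floor division for nanos, which matches the round-toward-zero behavior only for the non-negative nanos a valid Timestamp holds):

```python
import datetime

_EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def to_datetime(seconds, nanos):
    # Timedelta arithmetic keeps the full datetime range; nanos are
    # reduced to microsecond precision, as datetime cannot hold nanos.
    return _EPOCH + datetime.timedelta(
        seconds=seconds, microseconds=nanos // 1000
    )

print(to_datetime(0, 21_000_000))  # 1970-01-01 00:00:00.021000+00:00
```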
  def FromDatetime(self, dt):
    """Converts datetime to Timestamp.

    Args:
      dt: A datetime. If it's timezone-naive, it's assumed to be in UTC.
    """
    # Using this guide: http://wiki.python.org/moin/WorkingWithTime
    # And this conversion guide: http://docs.python.org/library/time.html

    # Turn the date parameter into a tuple (struct_time) that can then be
    # manipulated into a long value of seconds. During the conversion from
    # struct_time to long, the source date is in UTC, so it follows that the
    # correct transformation is calendar.timegm().
    try:
      seconds = calendar.timegm(dt.utctimetuple())
      nanos = dt.microsecond * _NANOS_PER_MICROSECOND
    except AttributeError as e:
      raise AttributeError(
          'Failed to convert to Timestamp. Expected a datetime-like '
          'object, got {0}: {1}'.format(type(dt).__name__, e)
      ) from e
    _CheckTimestampValid(seconds, nanos)
    self.seconds = seconds
    self.nanos = nanos

  def _internal_assign(self, dt):
    self.FromDatetime(dt)

  def __add__(self, value) -> datetime.datetime:
    if isinstance(value, Duration):
      return self.ToDatetime() + value.ToTimedelta()
    return self.ToDatetime() + value

  __radd__ = __add__

  def __sub__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
    if isinstance(value, Timestamp):
      return self.ToDatetime() - value.ToDatetime()
    elif isinstance(value, Duration):
      return self.ToDatetime() - value.ToTimedelta()
    return self.ToDatetime() - value

  def __rsub__(self, dt) -> datetime.timedelta:
    return dt - self.ToDatetime()

def _CheckTimestampValid(seconds, nanos):
  if seconds < _TIMESTAMP_SECONDS_MIN or seconds > _TIMESTAMP_SECONDS_MAX:
    raise ValueError(
        'Timestamp is not valid: Seconds {0} must be in range '
        '[-62135596800, 253402300799].'.format(seconds))
  if nanos < 0 or nanos >= _NANOS_PER_SECOND:
    raise ValueError(
        'Timestamp is not valid: Nanos {} must be in range '
        '[0, 999999999].'.format(nanos)
    )

class Duration(object):
  """Class for Duration message type."""

  __slots__ = ()

  def ToJsonString(self):
    """Converts Duration to string format.

    Returns:
      A string converted from self. The string format will contain
      3, 6, or 9 fractional digits depending on the precision required to
      represent the exact Duration value. For example: "1s", "1.010s",
      "1.000000100s", "-3.100s"
    """
    _CheckDurationValid(self.seconds, self.nanos)
    if self.seconds < 0 or self.nanos < 0:
      result = '-'
      seconds = -self.seconds + int((0 - self.nanos) // 1e9)
      nanos = (0 - self.nanos) % 1e9
    else:
      result = ''
      seconds = self.seconds + int(self.nanos // 1e9)
      nanos = self.nanos % 1e9
    result += '%d' % seconds
    if (nanos % 1e9) == 0:
      # If there are 0 fractional digits, the fractional
      # point '.' should be omitted when serializing.
      return result + 's'
    if (nanos % 1e6) == 0:
      # Serialize 3 fractional digits.
      return result + '.%03ds' % (nanos / 1e6)
    if (nanos % 1e3) == 0:
      # Serialize 6 fractional digits.
      return result + '.%06ds' % (nanos / 1e3)
    # Serialize 9 fractional digits.
    return result + '.%09ds' % nanos

  def FromJsonString(self, value):
    """Converts a string to Duration.

    Args:
      value: A string to be converted. The string must end with 's'. Any
          fractional digits (or none) are accepted as long as they fit into
          precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s"

    Raises:
      ValueError: On parsing problems.
    """
    if not isinstance(value, str):
      raise ValueError('Duration JSON value not a string: {!r}'.format(value))
    if len(value) < 1 or value[-1] != 's':
      raise ValueError('Duration must end with letter "s": {0}.'.format(value))
    try:
      pos = value.find('.')
      if pos == -1:
        seconds = int(value[:-1])
        nanos = 0
      else:
        seconds = int(value[:pos])
        if value[0] == '-':
          nanos = int(round(float('-0{0}'.format(value[pos:-1])) * 1e9))
        else:
          nanos = int(round(float('0{0}'.format(value[pos:-1])) * 1e9))
      _CheckDurationValid(seconds, nanos)
      self.seconds = seconds
      self.nanos = nanos
    except ValueError as e:
      raise ValueError("Couldn't parse duration: {0} : {1}.".format(value, e))

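The parsing rule above (split at the decimal point, carry the sign into the fractional part so nanos get the same sign as seconds) can be sketched standalone — a hypothetical helper, not the real API, and without the range validation:

```python
def parse_duration(value):
    # Minimal sketch of the '1.5s' parsing rule: returns (seconds, nanos).
    if len(value) < 1 or value[-1] != 's':
        raise ValueError('Duration must end with "s": %s' % value)
    body = value[:-1]
    pos = body.find('.')
    if pos == -1:
        return int(body), 0
    seconds = int(body[:pos])
    # Prefix '-0'/'0' so the fractional part carries the overall sign.
    frac = float(('-0' if body[0] == '-' else '0') + body[pos:])
    return seconds, int(round(frac * 1e9))

print(parse_duration('1.010s'))   # (1, 10000000)
print(parse_duration('-3.100s'))  # (-3, -100000000)
```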
  def ToNanoseconds(self):
    """Converts a Duration to nanoseconds."""
    return self.seconds * _NANOS_PER_SECOND + self.nanos

  def ToMicroseconds(self):
    """Converts a Duration to microseconds."""
    micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND)
    return self.seconds * _MICROS_PER_SECOND + micros

  def ToMilliseconds(self):
    """Converts a Duration to milliseconds."""
    millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND)
    return self.seconds * _MILLIS_PER_SECOND + millis

  def ToSeconds(self):
    """Converts a Duration to seconds."""
    return self.seconds

  def FromNanoseconds(self, nanos):
    """Converts nanoseconds to Duration."""
    self._NormalizeDuration(
        nanos // _NANOS_PER_SECOND, nanos % _NANOS_PER_SECOND
    )

  def FromMicroseconds(self, micros):
    """Converts microseconds to Duration."""
    self._NormalizeDuration(
        micros // _MICROS_PER_SECOND,
        (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND,
    )

  def FromMilliseconds(self, millis):
    """Converts milliseconds to Duration."""
    self._NormalizeDuration(
        millis // _MILLIS_PER_SECOND,
        (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND,
    )

  def FromSeconds(self, seconds):
    """Converts seconds to Duration."""
    self.seconds = seconds
    self.nanos = 0

def ToTimedelta(self) -> datetime.timedelta:
|
||||
"""Converts Duration to timedelta."""
|
||||
return datetime.timedelta(
|
||||
seconds=self.seconds,
|
||||
microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND),
|
||||
)
|
||||
|
||||
def FromTimedelta(self, td):
|
||||
"""Converts timedelta to Duration."""
|
||||
try:
|
||||
self._NormalizeDuration(
|
||||
td.seconds + td.days * _SECONDS_PER_DAY,
|
||||
td.microseconds * _NANOS_PER_MICROSECOND,
|
||||
)
|
||||
except AttributeError as e:
|
||||
raise AttributeError(
|
||||
'Fail to convert to Duration. Expected a timedelta like '
|
||||
'object got {0}: {1}'.format(type(td).__name__, e)
|
||||
) from e
|
||||
|
||||
def _internal_assign(self, td):
|
||||
self.FromTimedelta(td)
|
||||
|
||||
def _NormalizeDuration(self, seconds, nanos):
|
||||
"""Set Duration by seconds and nanos."""
|
||||
# Force nanos to be negative if the duration is negative.
|
||||
if seconds < 0 and nanos > 0:
|
||||
seconds += 1
|
||||
nanos -= _NANOS_PER_SECOND
|
||||
self.seconds = seconds
|
||||
self.nanos = nanos
|
||||
|
||||
def __add__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
|
||||
if isinstance(value, Timestamp):
|
||||
return self.ToTimedelta() + value.ToDatetime()
|
||||
return self.ToTimedelta() + value
|
||||
|
||||
__radd__ = __add__
|
||||
|
||||
def __sub__(self, value) -> datetime.timedelta:
|
||||
return self.ToTimedelta() - value
|
||||
|
||||
def __rsub__(self, value) -> Union[datetime.datetime, datetime.timedelta]:
|
||||
return value - self.ToTimedelta()
|
||||
|
||||
|
||||
def _CheckDurationValid(seconds, nanos):
  if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX:
    raise ValueError(
        'Duration is not valid: Seconds {0} must be in range '
        '[-315576000000, 315576000000].'.format(seconds)
    )
  if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND:
    raise ValueError(
        'Duration is not valid: Nanos {0} must be in range '
        '[-999999999, 999999999].'.format(nanos)
    )
  if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0):
    raise ValueError('Duration is not valid: Sign mismatch.')
def _RoundTowardZero(value, divider):
  """Truncates the remainder part after division."""
  # For some languages, the sign of the remainder is implementation
  # dependent if any of the operands is negative. Here we enforce
  # "rounded toward zero" semantics. For example, for (-5) / 2 an
  # implementation may give -3 as the result with the remainder being
  # 1. This function ensures we always return -2 (closer to zero).
  result = value // divider
  remainder = value % divider
  if result < 0 and remainder > 0:
    return result + 1
  else:
    return result
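The round-toward-zero compensation described in that comment can be checked in isolation. This is a self-contained sketch (the lowercase `round_toward_zero` name is invented for the example, not part of this file):

```python
def round_toward_zero(value, divider):
    # Python's // floors, so -5 // 2 == -3 with remainder 1;
    # compensate to truncate toward zero instead (-2).
    result, remainder = value // divider, value % divider
    if result < 0 and remainder > 0:
        return result + 1
    return result

print(round_toward_zero(-5, 2))  # -2, not the floored -3
print(round_toward_zero(5, 2))   # 2
```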
def _SetStructValue(struct_value, value):
  if value is None:
    struct_value.null_value = 0
  elif isinstance(value, bool):
    # Note: this check must come before the number check because in Python
    # True and False are also considered numbers.
    struct_value.bool_value = value
  elif isinstance(value, str):
    struct_value.string_value = value
  elif isinstance(value, (int, float)):
    struct_value.number_value = value
  elif isinstance(value, (dict, Struct)):
    struct_value.struct_value.Clear()
    struct_value.struct_value.update(value)
  elif isinstance(value, (list, tuple, ListValue)):
    struct_value.list_value.Clear()
    struct_value.list_value.extend(value)
  else:
    raise ValueError('Unexpected type')
def _GetStructValue(struct_value):
  which = struct_value.WhichOneof('kind')
  if which == 'struct_value':
    return struct_value.struct_value
  elif which == 'null_value':
    return None
  elif which == 'number_value':
    return struct_value.number_value
  elif which == 'string_value':
    return struct_value.string_value
  elif which == 'bool_value':
    return struct_value.bool_value
  elif which == 'list_value':
    return struct_value.list_value
  elif which is None:
    raise ValueError('Value not set')
class Struct(object):
  """Class for Struct message type."""

  __slots__ = ()

  def __getitem__(self, key):
    return _GetStructValue(self.fields[key])

  def __setitem__(self, key, value):
    _SetStructValue(self.fields[key], value)

  def __delitem__(self, key):
    del self.fields[key]

  def __len__(self):
    return len(self.fields)

  def __iter__(self):
    return iter(self.fields)

  def _internal_assign(self, dictionary):
    self.Clear()
    self.update(dictionary)

  def _internal_compare(self, other):
    size = len(self)
    if size != len(other):
      return False
    for key, value in self.items():
      if key not in other:
        return False
      if isinstance(other[key], (dict, list)):
        if not value._internal_compare(other[key]):
          return False
      elif value != other[key]:
        return False
    return True

  def keys(self):  # pylint: disable=invalid-name
    return self.fields.keys()

  def values(self):  # pylint: disable=invalid-name
    return [self[key] for key in self]

  def items(self):  # pylint: disable=invalid-name
    return [(key, self[key]) for key in self]

  def get_or_create_list(self, key):
    """Returns a list for this key, creating if it didn't exist already."""
    if not self.fields[key].HasField('list_value'):
      # Clear will mark list_value modified which will indeed create a list.
      self.fields[key].list_value.Clear()
    return self.fields[key].list_value

  def get_or_create_struct(self, key):
    """Returns a struct for this key, creating if it didn't exist already."""
    if not self.fields[key].HasField('struct_value'):
      # Clear will mark struct_value modified which will indeed create a struct.
      self.fields[key].struct_value.Clear()
    return self.fields[key].struct_value

  def update(self, dictionary):  # pylint: disable=invalid-name
    for key, value in dictionary.items():
      _SetStructValue(self.fields[key], value)


collections.abc.MutableMapping.register(Struct)
class ListValue(object):
  """Class for ListValue message type."""

  __slots__ = ()

  def __len__(self):
    return len(self.values)

  def append(self, value):
    _SetStructValue(self.values.add(), value)

  def extend(self, elem_seq):
    for value in elem_seq:
      self.append(value)

  def __getitem__(self, index):
    """Retrieves item by the specified index."""
    return _GetStructValue(self.values.__getitem__(index))

  def __setitem__(self, index, value):
    _SetStructValue(self.values.__getitem__(index), value)

  def __delitem__(self, key):
    del self.values[key]

  def _internal_assign(self, elem_seq):
    self.Clear()
    self.extend(elem_seq)

  def _internal_compare(self, other):
    size = len(self)
    if size != len(other):
      return False
    for i in range(size):
      if isinstance(other[i], (dict, list)):
        if not self[i]._internal_compare(other[i]):
          return False
      elif self[i] != other[i]:
        return False
    return True

  def items(self):
    for i in range(len(self)):
      yield self[i]

  def add_struct(self):
    """Appends and returns a struct value as the next value in the list."""
    struct_value = self.values.add().struct_value
    # Clear will mark struct_value modified which will indeed create a struct.
    struct_value.Clear()
    return struct_value

  def add_list(self):
    """Appends and returns a list value as the next value in the list."""
    list_value = self.values.add().list_value
    # Clear will mark list_value modified which will indeed create a list.
    list_value.Clear()
    return list_value


collections.abc.MutableSequence.register(ListValue)
# LINT.IfChange(wktbases)
WKTBASES = {
    'google.protobuf.Any': Any,
    'google.protobuf.Duration': Duration,
    'google.protobuf.FieldMask': FieldMask,
    'google.protobuf.ListValue': ListValue,
    'google.protobuf.Struct': Struct,
    'google.protobuf.Timestamp': Timestamp,
}
# LINT.ThenChange(//depot/google.protobuf/compiler/python/pyi_generator.cc:wktbases)
+245
@@ -0,0 +1,245 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Constants and static functions to support protocol buffer wire format."""

__author__ = 'robinson@google.com (Will Robinson)'

import struct

from google.protobuf import descriptor
from google.protobuf import message
TAG_TYPE_BITS = 3  # Number of bits used to hold type info in a proto tag.
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1  # 0x7

# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in //google/protobuf/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5


# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1

INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1

# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'


# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
  raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
  raise AssertionError('Format "Q" is not a 64-bit number.')
def PackTag(field_number, wire_type):
  """Returns an unsigned 32-bit integer that encodes the field number and
  wire type information in standard protocol message wire format.

  Args:
    field_number: Expected to be an integer in the range [1, 1 << 29)
    wire_type: One of the WIRETYPE_* constants.
  """
  if not 0 <= wire_type <= _WIRETYPE_MAX:
    raise message.EncodeError('Unknown wire type: %d' % wire_type)
  return (field_number << TAG_TYPE_BITS) | wire_type


def UnpackTag(tag):
  """The inverse of PackTag().  Given an unsigned 32-bit number,
  returns a (field_number, wire_type) tuple.
  """
  return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
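The tag packing shown in `PackTag`/`UnpackTag` above is plain bit arithmetic and can be sketched standalone (the snake_case names below are invented for the example):

```python
TAG_TYPE_BITS = 3
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1  # 0x7

def pack_tag(field_number, wire_type):
    # Field number in the high bits, wire type in the low 3 bits.
    return (field_number << TAG_TYPE_BITS) | wire_type

def unpack_tag(tag):
    return tag >> TAG_TYPE_BITS, tag & TAG_TYPE_MASK

# Field 1 with wire type 2 (length-delimited) gives tag byte 0x0A,
# the familiar first byte of many serialized protos.
tag = pack_tag(1, 2)
print(hex(tag))         # 0xa
print(unpack_tag(tag))  # (1, 2)
```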
def ZigZagEncode(value):
  """ZigZag Transform:  Encodes signed integers so that they can be
  effectively used with varint encoding.  See wire_format.h for
  more details.
  """
  if value >= 0:
    return value << 1
  return (value << 1) ^ (~0)


def ZigZagDecode(value):
  """Inverse of ZigZagEncode()."""
  if not value & 0x1:
    return value >> 1
  return (value >> 1) ^ (~0)
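The ZigZag transform above maps signed integers with small magnitude to small unsigned values, so they varint-encode compactly. A self-contained sketch (lowercase names invented for the example):

```python
def zigzag_encode(value):
    # Maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
    if value >= 0:
        return value << 1
    return (value << 1) ^ (~0)

def zigzag_decode(value):
    if not value & 0x1:
        return value >> 1
    return (value >> 1) ^ (~0)

print([zigzag_encode(v) for v in (0, -1, 1, -2, 2)])  # [0, 1, 2, 3, 4]
assert all(zigzag_decode(zigzag_encode(v)) == v for v in range(-5, 6))
```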
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.


def Int32ByteSize(field_number, int32):
  return Int64ByteSize(field_number, int32)


def Int32ByteSizeNoTag(int32):
  return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)


def Int64ByteSize(field_number, int64):
  # Have to convert to uint before calling UInt64ByteSize().
  return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)


def UInt32ByteSize(field_number, uint32):
  return UInt64ByteSize(field_number, uint32)


def UInt64ByteSize(field_number, uint64):
  return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)


def SInt32ByteSize(field_number, int32):
  return UInt32ByteSize(field_number, ZigZagEncode(int32))


def SInt64ByteSize(field_number, int64):
  return UInt64ByteSize(field_number, ZigZagEncode(int64))


def Fixed32ByteSize(field_number, fixed32):
  return TagByteSize(field_number) + 4


def Fixed64ByteSize(field_number, fixed64):
  return TagByteSize(field_number) + 8


def SFixed32ByteSize(field_number, sfixed32):
  return TagByteSize(field_number) + 4


def SFixed64ByteSize(field_number, sfixed64):
  return TagByteSize(field_number) + 8


def FloatByteSize(field_number, flt):
  return TagByteSize(field_number) + 4


def DoubleByteSize(field_number, double):
  return TagByteSize(field_number) + 8


def BoolByteSize(field_number, b):
  return TagByteSize(field_number) + 1


def EnumByteSize(field_number, enum):
  return UInt32ByteSize(field_number, enum)


def StringByteSize(field_number, string):
  return BytesByteSize(field_number, string.encode('utf-8'))


def BytesByteSize(field_number, b):
  return (TagByteSize(field_number)
          + _VarUInt64ByteSizeNoTag(len(b))
          + len(b))


def GroupByteSize(field_number, message):
  return (2 * TagByteSize(field_number)  # START and END group.
          + message.ByteSize())


def MessageByteSize(field_number, message):
  return (TagByteSize(field_number)
          + _VarUInt64ByteSizeNoTag(message.ByteSize())
          + message.ByteSize())


def MessageSetItemByteSize(field_number, msg):
  # First compute the sizes of the tags.
  # There are 2 tags for the beginning and ending of the repeated group, that
  # is field number 1, one with field number 2 (type_id) and one with field
  # number 3 (message).
  total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))

  # Add the number of bytes for type_id.
  total_size += _VarUInt64ByteSizeNoTag(field_number)

  message_size = msg.ByteSize()

  # The number of bytes for encoding the length of the message.
  total_size += _VarUInt64ByteSizeNoTag(message_size)

  # The size of the message.
  total_size += message_size
  return total_size


def TagByteSize(field_number):
  """Returns the bytes required to serialize a tag with this field number."""
  # Just pass in type 0, since the type won't affect the tag+type size.
  return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
# Private helper function for the *ByteSize() functions above.

def _VarUInt64ByteSizeNoTag(uint64):
  """Returns the number of bytes required to serialize a single varint
  using boundary value comparisons.  (unrolled loop optimization -WPierce)
  uint64 must be unsigned.
  """
  if uint64 <= 0x7f: return 1
  if uint64 <= 0x3fff: return 2
  if uint64 <= 0x1fffff: return 3
  if uint64 <= 0xfffffff: return 4
  if uint64 <= 0x7ffffffff: return 5
  if uint64 <= 0x3ffffffffff: return 6
  if uint64 <= 0x1ffffffffffff: return 7
  if uint64 <= 0xffffffffffffff: return 8
  if uint64 <= 0x7fffffffffffffff: return 9
  if uint64 > UINT64_MAX:
    raise message.EncodeError('Value out of range: %d' % uint64)
  return 10


NON_PACKABLE_TYPES = (
    descriptor.FieldDescriptor.TYPE_STRING,
    descriptor.FieldDescriptor.TYPE_GROUP,
    descriptor.FieldDescriptor.TYPE_MESSAGE,
    descriptor.FieldDescriptor.TYPE_BYTES
)


def IsTypePackable(field_type):
  """Return true iff packable = true is valid for fields of this type.

  Args:
    field_type: a FieldDescriptor::Type value.

  Returns:
    True iff fields of this type are packable.
  """
  return field_type not in NON_PACKABLE_TYPES
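The unrolled boundary comparisons in `_VarUInt64ByteSizeNoTag` above encode the rule that each varint byte carries 7 payload bits. A closed-form sketch of the same size calculation (the `varint_size` name is invented for the example):

```python
def varint_size(n):
    # ceil(bit_length / 7), with a minimum of one byte for zero;
    # matches the 0x7f / 0x3fff / ... boundaries above.
    if n == 0:
        return 1
    return (n.bit_length() + 6) // 7

print(varint_size(0x7f))     # 1 (fits in 7 bits)
print(varint_size(0x80))     # 2
print(varint_size(0x3fff))   # 2
print(varint_size(1 << 63))  # 10
```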
File diff suppressed because it is too large
@@ -0,0 +1,448 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

# TODO: We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.


"""Contains an abstract base class for protocol messages."""

__author__ = 'robinson@google.com (Will Robinson)'

_INCONSISTENT_MESSAGE_ATTRIBUTES = ('Extensions',)


class Error(Exception):
  """Base error type for this module."""
  pass


class DecodeError(Error):
  """Exception raised when deserializing messages."""
  pass


class EncodeError(Error):
  """Exception raised when serializing messages."""
  pass
class Message(object):

  """Abstract base class for protocol messages.

  Protocol message classes are almost always generated by the protocol
  compiler.  These generated types subclass Message and implement the methods
  shown below.
  """

  # TODO: Link to an HTML document here.

  # TODO: Document that instances of this class will also
  # have an Extensions attribute with __getitem__ and __setitem__.
  # Again, not sure how to best convey this.

  # TODO: Document these fields and methods.

  __slots__ = []

  #: The :class:`google.protobuf.Descriptor`
  # for this message type.
  DESCRIPTOR = None

  def __deepcopy__(self, memo=None):
    clone = type(self)()
    clone.MergeFrom(self)
    return clone

  def __dir__(self):
    """Provides the list of all accessible Message attributes."""
    message_attributes = set(super().__dir__())

    # TODO: Remove this once the UPB implementation is improved.
    # The UPB proto implementation currently doesn't provide proto fields as
    # attributes and they have to be added.
    if self.DESCRIPTOR is not None:
      for field in self.DESCRIPTOR.fields:
        message_attributes.add(field.name)

    # The Fast C++ proto implementation provides inaccessible attributes that
    # have to be removed.
    for attribute in _INCONSISTENT_MESSAGE_ATTRIBUTES:
      if attribute not in message_attributes:
        continue
      try:
        getattr(self, attribute)
      except AttributeError:
        message_attributes.remove(attribute)

    return sorted(message_attributes)
  def __eq__(self, other_msg):
    """Recursively compares two messages by value and structure."""
    raise NotImplementedError

  def __ne__(self, other_msg):
    # Can't just say self != other_msg, since that would infinitely recurse. :)
    return not self == other_msg

  def __hash__(self):
    raise TypeError('unhashable object')

  def __str__(self):
    """Outputs a human-readable representation of the message."""
    raise NotImplementedError

  def __unicode__(self):
    """Outputs a human-readable representation of the message."""
    raise NotImplementedError

  def __contains__(self, field_name_or_key):
    """Checks if a certain field is set for the message.

    Has presence fields return true if the field is set, false if the field is
    not set. Fields without presence do raise `ValueError` (this includes
    repeated fields, map fields, and implicit presence fields).

    If field_name is not defined in the message descriptor, `ValueError` will
    be raised.
    Note: WKT Struct checks if the key is contained in fields. ListValue checks
    if the item is contained in the list.

    Args:
      field_name_or_key: For Struct, the key (str) of the fields map. For
        ListValue, any type that may be contained in the list. For other
        messages, name of the field (str) to check for presence.

    Returns:
      bool: For Struct, whether the item is contained in fields. For ListValue,
        whether the item is contained in the list. For other messages,
        whether a value has been set for the named field.

    Raises:
      ValueError: For normal messages, if the `field_name_or_key` is not a
        member of this message or `field_name_or_key` is not a string.
    """
    raise NotImplementedError
  def MergeFrom(self, other_msg):
    """Merges the contents of the specified message into current message.

    This method merges the contents of the specified message into the current
    message. Singular fields that are set in the specified message overwrite
    the corresponding fields in the current message. Repeated fields are
    appended. Singular sub-messages and groups are recursively merged.

    Args:
      other_msg (Message): A message to merge into the current message.
    """
    raise NotImplementedError

  def CopyFrom(self, other_msg):
    """Copies the content of the specified message into the current message.

    The method clears the current message and then merges the specified
    message using MergeFrom.

    Args:
      other_msg (Message): A message to copy into the current one.
    """
    if self is other_msg:
      return
    self.Clear()
    self.MergeFrom(other_msg)

  def Clear(self):
    """Clears all data that was set in the message."""
    raise NotImplementedError

  def SetInParent(self):
    """Mark this as present in the parent.

    This normally happens automatically when you assign a field of a
    sub-message, but sometimes you want to make the sub-message
    present while keeping it empty.  If you find yourself using this,
    you may want to reconsider your design.
    """
    raise NotImplementedError

  def IsInitialized(self):
    """Checks if the message is initialized.

    Returns:
      bool: The method returns True if the message is initialized (i.e. all of
        its required fields are set).
    """
    raise NotImplementedError

  # TODO: MergeFromString() should probably return None and be
  # implemented in terms of a helper that returns the # of bytes read.  Our
  # deserialization routines would use the helper when recursively
  # deserializing, but the end user would almost always just want the no-return
  # MergeFromString().
  def MergeFromString(self, serialized):
    """Merges serialized protocol buffer data into this message.

    When we find a field in `serialized` that is already present
    in this message:

    -   If it's a "repeated" field, we append to the end of our list.
    -   Else, if it's a scalar, we overwrite our field.
    -   Else, (it's a nonrepeated composite), we recursively merge
        into the existing composite.

    Args:
      serialized (bytes): Any object that allows us to call
        ``memoryview(serialized)`` to access a string of bytes using the
        buffer interface.

    Returns:
      int: The number of bytes read from `serialized`.
      For non-group messages, this will always be `len(serialized)`,
      but for messages which are actually groups, this will
      generally be less than `len(serialized)`, since we must
      stop when we reach an ``END_GROUP`` tag.  Note that if
      we *do* stop because of an ``END_GROUP`` tag, the number
      of bytes returned does not include the bytes
      for the ``END_GROUP`` tag information.

    Raises:
      DecodeError: if the input cannot be parsed.
    """
    # TODO: Document handling of unknown fields.
    # TODO: When we switch to a helper, this will return None.
    raise NotImplementedError

  def ParseFromString(self, serialized):
    """Parse serialized protocol buffer data in binary form into this message.

    Like :func:`MergeFromString()`, except we clear the object first.

    Raises:
      message.DecodeError if the input cannot be parsed.
    """
    self.Clear()
    return self.MergeFromString(serialized)
  def SerializeToString(self, **kwargs):
    """Serializes the protocol message to a binary string.

    Keyword Args:
      deterministic (bool): If true, requests deterministic serialization
        of the protobuf, with predictable ordering of map keys.

    Returns:
      A binary string representation of the message if all of the required
      fields in the message are set (i.e. the message is initialized).

    Raises:
      EncodeError: if the message isn't initialized (see :func:`IsInitialized`).
    """
    raise NotImplementedError

  def SerializePartialToString(self, **kwargs):
    """Serializes the protocol message to a binary string.

    This method is similar to SerializeToString but doesn't check if the
    message is initialized.

    Keyword Args:
      deterministic (bool): If true, requests deterministic serialization
        of the protobuf, with predictable ordering of map keys.

    Returns:
      bytes: A serialized representation of the partial message.
    """
    raise NotImplementedError

  # TODO: Decide whether we like these better
  # than auto-generated has_foo() and clear_foo() methods
  # on the instances themselves.  This way is less consistent
  # with C++, but it makes reflection-type access easier and
  # reduces the number of magically autogenerated things.
  #
  # TODO: Be sure to document (and test) exactly
  # which field names are accepted here.  Are we case-sensitive?
  # What do we do with fields that share names with Python keywords
  # like 'lambda' and 'yield'?
  #
  # nnorwitz says:
  # """
  # Typically (in python), an underscore is appended to names that are
  # keywords. So they would become lambda_ or yield_.
  # """
  def ListFields(self):
    """Returns a list of (FieldDescriptor, value) tuples for present fields.

    A message field is non-empty if HasField() would return true. A singular
    primitive field is non-empty if HasField() would return true in proto2 or it
    is non zero in proto3. A repeated field is non-empty if it contains at least
    one element. The fields are ordered by field number.

    Returns:
      list[tuple(FieldDescriptor, value)]: field descriptors and values
      for all fields in the message which are not empty. The values vary by
      field type.
    """
    raise NotImplementedError

  def HasField(self, field_name):
    """Checks if a certain field is set for the message.

    For a oneof group, checks if any field inside is set. Note that if the
    field_name is not defined in the message descriptor, :exc:`ValueError` will
    be raised.

    Args:
      field_name (str): The name of the field to check for presence.

    Returns:
      bool: Whether a value has been set for the named field.

    Raises:
      ValueError: if the `field_name` is not a member of this message.
    """
    raise NotImplementedError

  def ClearField(self, field_name):
    """Clears the contents of a given field.

    Inside a oneof group, clears the field set. If the name refers to neither a
    defined field nor a oneof group, :exc:`ValueError` is raised.

    Args:
      field_name (str): The name of the field to clear.

    Raises:
      ValueError: if the `field_name` is not a member of this message.
    """
    raise NotImplementedError
  def WhichOneof(self, oneof_group):
    """Returns the name of the field that is set inside a oneof group.

    If no field is set, returns None.

    Args:
      oneof_group (str): the name of the oneof group to check.

    Returns:
      str or None: The name of the field that is set, or None.

    Raises:
      ValueError: no group with the given name exists
    """
    raise NotImplementedError

  def HasExtension(self, field_descriptor):
    """Checks if a certain extension is present for this message.

    Extensions are retrieved using the :attr:`Extensions` mapping (if present).

    Args:
      field_descriptor: The field descriptor for the extension to check.

    Returns:
      bool: Whether the extension is present for this message.

    Raises:
      KeyError: if the extension is repeated. Similar to repeated fields,
        there is no separate notion of presence: a "not present" repeated
        extension is an empty list.
    """
    raise NotImplementedError

  def ClearExtension(self, field_descriptor):
    """Clears the contents of a given extension.

    Args:
      field_descriptor: The field descriptor for the extension to clear.
    """
    raise NotImplementedError
  def UnknownFields(self):
    """Returns the UnknownFieldSet.

    Returns:
      UnknownFieldSet: The unknown fields stored in this message.
    """
    raise NotImplementedError

  def DiscardUnknownFields(self):
    """Clears all fields in the :class:`UnknownFieldSet`.

    This operation is recursive for nested messages.
    """
    raise NotImplementedError

  def ByteSize(self):
    """Returns the serialized size of this message.

    Recursively calls ByteSize() on all contained messages.

    Returns:
      int: The number of bytes required to serialize this message.
    """
    raise NotImplementedError

  @classmethod
  def FromString(cls, s):
    raise NotImplementedError
def _SetListener(self, message_listener):
|
||||
"""Internal method used by the protocol message implementation.
|
||||
Clients should not call this directly.
|
||||
|
||||
Sets a listener that this message will call on certain state transitions.
|
||||
|
||||
The purpose of this method is to register back-edges from children to
|
||||
parents at runtime, for the purpose of setting "has" bits and
|
||||
byte-size-dirty bits in the parent and ancestor objects whenever a child or
|
||||
descendant object is modified.
|
||||
|
||||
If the client wants to disconnect this Message from the object tree, she
|
||||
explicitly sets callback to None.
|
||||
|
||||
If message_listener is None, unregisters any existing listener. Otherwise,
|
||||
message_listener must implement the MessageListener interface in
|
||||
internal/message_listener.py, and we discard any listener registered
|
||||
via a previous _SetListener() call.
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
def __getstate__(self):
|
||||
"""Support the pickle protocol."""
|
||||
return dict(serialized=self.SerializePartialToString())
|
||||
|
||||
def __setstate__(self, state):
|
||||
"""Support the pickle protocol."""
|
||||
self.__init__()
|
||||
serialized = state['serialized']
|
||||
# On Python 3, using encoding='latin1' is required for unpickling
|
||||
# protos pickled by Python 2.
|
||||
if not isinstance(serialized, bytes):
|
||||
serialized = serialized.encode('latin1')
|
||||
self.ParseFromString(serialized)
|
||||
|
||||
def __reduce__(self):
|
||||
message_descriptor = self.DESCRIPTOR
|
||||
if message_descriptor.containing_type is None:
|
||||
return type(self), (), self.__getstate__()
|
||||
# the message type must be nested.
|
||||
# Python does not pickle nested classes; use the symbol_database on the
|
||||
# receiving end.
|
||||
container = message_descriptor
|
||||
return (_InternalConstructMessage, (container.full_name,),
|
||||
self.__getstate__())
|
||||
|
||||
|
||||
def _InternalConstructMessage(full_name):
|
||||
"""Constructs a nested message."""
|
||||
from google.protobuf import symbol_database # pylint:disable=g-import-not-at-top
|
||||
|
||||
return symbol_database.Default().GetSymbol(full_name)()
|
||||
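The `__getstate__`/`__setstate__` pair above pickles a message by storing only its serialized bytes rather than live object state. A stdlib-only sketch of the same pattern, using `json` as a stand-in for protobuf wire serialization (the `Note` class and its field are hypothetical, for illustration only):

```python
import json
import pickle


class Note:
    """Toy message that pickles via its serialized form, like Message above."""

    def __init__(self, text=''):
        self.text = text

    def SerializePartialToString(self):
        # Stand-in for protobuf serialization.
        return json.dumps({'text': self.text}).encode('utf-8')

    def ParseFromString(self, serialized):
        self.text = json.loads(serialized.decode('utf-8'))['text']

    def __getstate__(self):
        # Store only the serialized bytes, not the live attributes.
        return dict(serialized=self.SerializePartialToString())

    def __setstate__(self, state):
        # Re-initialize, then rebuild state from the serialized form.
        self.__init__()
        self.ParseFromString(state['serialized'])


restored = pickle.loads(pickle.dumps(Note('hello')))
print(restored.text)  # hello
```

The benefit of this design is that the pickle payload stays valid across implementation changes, since it depends only on the serialization format.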
+190
@@ -0,0 +1,190 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Provides a factory class for generating dynamic messages.

The easiest way to use this class is if you have access to the FileDescriptor
protos containing the messages you want to create you can just do the following:

message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
my_proto_instance = message_classes['some.proto.package.MessageName']()
"""

__author__ = 'matthewtoia@google.com (Matt Toia)'

import warnings

from google.protobuf import descriptor_pool
from google.protobuf import message
from google.protobuf.internal import api_implementation

if api_implementation.Type() == 'python':
  from google.protobuf.internal import python_message as message_impl
else:
  from google.protobuf.pyext import cpp_message as message_impl  # pylint: disable=g-import-not-at-top


# The type of all Message classes.
_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType


def GetMessageClass(descriptor):
  """Obtains a proto2 message class based on the passed in descriptor.

  Passing a descriptor with a fully qualified name matching a previous
  invocation will cause the same class to be returned.

  Args:
    descriptor: The descriptor to build from.

  Returns:
    A class describing the passed in descriptor.
  """
  concrete_class = getattr(descriptor, '_concrete_class', None)
  if concrete_class:
    return concrete_class
  return _InternalCreateMessageClass(descriptor)


def GetMessageClassesForFiles(files, pool):
  """Gets all the messages from specified files.

  This will find and resolve dependencies, failing if the descriptor
  pool cannot satisfy them.

  This will not return the classes for nested types within those classes; for
  those, use GetMessageClass() on the nested types within their containing
  messages.

  For example, for the message:

    message NestedTypeMessage {
      message NestedType {
        string data = 1;
      }
      NestedType nested = 1;
    }

  NestedTypeMessage will be in the result, but not
  NestedTypeMessage.NestedType.

  Args:
    files: The file names to extract messages from.
    pool: The descriptor pool to find the files including the dependent files.

  Returns:
    A dictionary mapping proto names to the message classes.
  """
  result = {}
  for file_name in files:
    file_desc = pool.FindFileByName(file_name)
    for desc in file_desc.message_types_by_name.values():
      result[desc.full_name] = GetMessageClass(desc)

    # While the extension FieldDescriptors are created by the descriptor pool,
    # the python classes created in the factory need them to be registered
    # explicitly, which is done below.
    #
    # The call to RegisterExtension will specifically check if the
    # extension was already registered on the object and either
    # ignore the registration if the original was the same, or raise
    # an error if they were different.

    for extension in file_desc.extensions_by_name.values():
      _ = GetMessageClass(extension.containing_type)
      if api_implementation.Type() != 'python':
        # TODO: Remove this check here. Duplicate extension
        # register check should be in descriptor_pool.
        if extension is not pool.FindExtensionByNumber(
            extension.containing_type, extension.number
        ):
          raise ValueError('Double registration of Extensions')
      # Recursively load protos for extension field, in order to be able to
      # fully represent the extension. This matches the behavior for regular
      # fields too.
      if extension.message_type:
        GetMessageClass(extension.message_type)
  return result


def _InternalCreateMessageClass(descriptor):
  """Builds a proto2 message class based on the passed in descriptor.

  Args:
    descriptor: The descriptor to build from.

  Returns:
    A class describing the passed in descriptor.
  """
  descriptor_name = descriptor.name
  result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
      descriptor_name,
      (message.Message,),
      {
          'DESCRIPTOR': descriptor,
          # If module not set, it wrongly points to message_factory module.
          '__module__': None,
      },
  )
  for field in descriptor.fields:
    if field.message_type:
      GetMessageClass(field.message_type)

  for extension in result_class.DESCRIPTOR.extensions:
    extended_class = GetMessageClass(extension.containing_type)
    if api_implementation.Type() != 'python':
      # TODO: Remove this check here. Duplicate extension
      # register check should be in descriptor_pool.
      pool = extension.containing_type.file.pool
      if extension is not pool.FindExtensionByNumber(
          extension.containing_type, extension.number
      ):
        raise ValueError('Double registration of Extensions')
    if extension.message_type:
      GetMessageClass(extension.message_type)
  return result_class


# Deprecated. Please use GetMessageClass() or GetMessageClassesForFiles()
# method above instead.
class MessageFactory(object):
  """Factory for creating Proto2 messages from descriptors in a pool."""

  def __init__(self, pool=None):
    """Initializes a new factory."""
    self.pool = pool or descriptor_pool.DescriptorPool()


def GetMessages(file_protos, pool=None):
  """Builds a dictionary of all the messages available in a set of files.

  Args:
    file_protos: Iterable of FileDescriptorProto to build messages out of.
    pool: The descriptor pool to add the file protos.

  Returns:
    A dictionary mapping proto names to the message classes. This will include
    any dependent messages as well as any messages defined in the same file as
    a specified message.
  """
  # The cpp implementation of the protocol buffer library requires to add the
  # message in topological order of the dependency graph.
  des_pool = pool or descriptor_pool.DescriptorPool()
  file_by_name = {file_proto.name: file_proto for file_proto in file_protos}

  def _AddFile(file_proto):
    for dependency in file_proto.dependency:
      if dependency in file_by_name:
        # Remove from elements to be visited, in order to cut cycles.
        _AddFile(file_by_name.pop(dependency))
    des_pool.Add(file_proto)

  while file_by_name:
    _AddFile(file_by_name.popitem()[1])
  return GetMessageClassesForFiles(
      [file_proto.name for file_proto in file_protos], des_pool
  )
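`GetMessages` must feed files into the pool in topological order of the dependency graph, which the recursive `_AddFile` achieves by popping each dependency from the worklist before adding the dependent file. The same traversal can be checked standalone on plain dicts (hypothetical file records stand in for real `FileDescriptorProto`s):

```python
# Hypothetical file records: each has a name and a list of dependency names.
files = [
    {'name': 'c.proto', 'dependency': ['a.proto', 'b.proto']},
    {'name': 'b.proto', 'dependency': ['a.proto']},
    {'name': 'a.proto', 'dependency': []},
]

added = []  # stands in for des_pool.Add
file_by_name = {f['name']: f for f in files}


def _add_file(file_proto):
    for dependency in file_proto['dependency']:
        if dependency in file_by_name:
            # Remove from elements to be visited, in order to cut cycles.
            _add_file(file_by_name.pop(dependency))
    added.append(file_proto['name'])


while file_by_name:
    _add_file(file_by_name.popitem()[1])

print(added)  # every file appears after all of its dependencies
```

Popping a dependency before recursing is what prevents infinite recursion on cyclic imports: a file already removed from `file_by_name` is never visited twice.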
@@ -0,0 +1,153 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Contains the Nextgen Pythonic protobuf APIs."""

import io
from typing import Text, Type, TypeVar

from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.message import Message

_MESSAGE = TypeVar('_MESSAGE', bound='Message')


def serialize(message: _MESSAGE, deterministic: bool = None) -> bytes:
  """Return the serialized proto.

  Args:
    message: The proto message to be serialized.
    deterministic: If true, requests deterministic serialization
      of the protobuf, with predictable ordering of map keys.

  Returns:
    A binary bytes representation of the message.
  """
  return message.SerializeToString(deterministic=deterministic)


def parse(message_class: Type[_MESSAGE], payload: bytes) -> _MESSAGE:
  """Given serialized data in binary form, deserialize it into a Message.

  Args:
    message_class: The message meta class.
    payload: Serialized bytes in binary form.

  Returns:
    A new message deserialized from payload.
  """
  new_message = message_class()
  new_message.ParseFromString(payload)
  return new_message


def serialize_length_prefixed(message: _MESSAGE, output: io.BytesIO) -> None:
  """Writes the size of the message as a varint and the serialized message.

  Writes the size of the message as a varint and then the serialized message.
  This allows more data to be written to the output after the message. Use
  parse_length_prefixed to parse messages written by this method.

  The output stream must be buffered, e.g. using
  https://docs.python.org/3/library/io.html#buffered-streams.

  Example usage:
    out = io.BytesIO()
    for msg in message_list:
      proto.serialize_length_prefixed(msg, out)

  Args:
    message: The protocol buffer message that should be serialized.
    output: BytesIO or custom buffered IO that data should be written to.
  """
  size = message.ByteSize()
  encoder._VarintEncoder()(output.write, size)
  out_size = output.write(serialize(message))

  if out_size != size:
    raise TypeError(
        'Failed to write complete message (wrote: %d, expected: %d)'
        '. Ensure output is using buffered IO.' % (out_size, size)
    )


def parse_length_prefixed(
    message_class: Type[_MESSAGE], input_bytes: io.BytesIO
) -> _MESSAGE:
  """Parse a message from input_bytes.

  Args:
    message_class: The protocol buffer message class that parser should parse.
    input_bytes: A buffered input.

  Example usage:
    while True:
      msg = proto.parse_length_prefixed(message_class, input_bytes)
      if msg is None:
        break
      ...

  Returns:
    A parsed message if successful. None if input_bytes is at EOF.
  """
  size = decoder._DecodeVarint(input_bytes)
  if size is None:
    # It is the end of buffered input. See example usage in the
    # API description.
    return None

  message = message_class()

  if size == 0:
    return message

  parsed_size = message.ParseFromString(input_bytes.read(size))
  if parsed_size != size:
    raise ValueError(
        'Truncated message or non-buffered input_bytes: '
        'Expected {0} bytes but only {1} bytes parsed for '
        '{2}.'.format(size, parsed_size, message.DESCRIPTOR.name)
    )
  return message


def byte_size(message: Message) -> int:
  """Returns the serialized size of this message.

  Args:
    message: A proto message.

  Returns:
    int: The number of bytes required to serialize this message.
  """
  return message.ByteSize()


def clear_message(message: Message) -> None:
  """Clears all data that was set in the message.

  Args:
    message: The proto message to be cleared.
  """
  message.Clear()


def clear_field(message: Message, field_name: Text) -> None:
  """Clears the contents of a given field.

  Inside a oneof group, clears the field set. If the name refers to neither a
  defined field nor a oneof group, :exc:`ValueError` is raised.

  Args:
    message: The proto message.
    field_name (str): The name of the field to be cleared.

  Raises:
    ValueError: if the `field_name` is not a member of this message.
  """
  message.ClearField(field_name)
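`serialize_length_prefixed` frames each message as a base-128 varint size followed by the payload, and `parse_length_prefixed` reads frames back until EOF. A stdlib-only sketch of that framing, with hand-rolled varint helpers and plain `bytes` payloads standing in for serialized messages:

```python
import io
from typing import Optional


def encode_varint(value: int) -> bytes:
    """Base-128 varint, least-significant 7-bit group first (protobuf wire format)."""
    out = bytearray()
    while True:
        bits = value & 0x7F
        value >>= 7
        if value:
            out.append(bits | 0x80)  # continuation bit set: more groups follow
        else:
            out.append(bits)
            return bytes(out)


def write_length_prefixed(payload: bytes, output: io.BytesIO) -> None:
    # Varint size first, then the payload, mirroring serialize_length_prefixed.
    output.write(encode_varint(len(payload)))
    output.write(payload)


def read_length_prefixed(stream: io.BytesIO) -> Optional[bytes]:
    # Decode the varint size byte by byte; None at EOF means no more frames.
    size, shift = 0, 0
    while True:
        byte = stream.read(1)
        if not byte:
            return None
        size |= (byte[0] & 0x7F) << shift
        shift += 7
        if not byte[0] & 0x80:
            break
    return stream.read(size)


buf = io.BytesIO()
for chunk in (b'hello', b'', b'x' * 300):
    write_length_prefixed(chunk, buf)
buf.seek(0)
print([read_length_prefixed(buf) for _ in range(3)])
```

Because each frame carries its own size, a reader can pull messages out of one stream until `read_length_prefixed` returns `None`, which is exactly the loop shown in the `parse_length_prefixed` docstring.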
@@ -0,0 +1,111 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
#
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd

"""Dynamic Protobuf class creator."""

from collections import OrderedDict
import hashlib
import os

from google.protobuf import descriptor_pb2
from google.protobuf import descriptor
from google.protobuf import descriptor_pool
from google.protobuf import message_factory


def _GetMessageFromFactory(pool, full_name):
  """Get a proto class from the MessageFactory by name.

  Args:
    pool: a descriptor pool.
    full_name: str, the fully qualified name of the proto type.
  Returns:
    A class, for the type identified by full_name.
  Raises:
    KeyError, if the proto is not found in the factory's descriptor pool.
  """
  proto_descriptor = pool.FindMessageTypeByName(full_name)
  proto_cls = message_factory.GetMessageClass(proto_descriptor)
  return proto_cls


def MakeSimpleProtoClass(fields, full_name=None, pool=None):
  """Create a Protobuf class whose fields are basic types.

  Note: this doesn't validate field names!

  Args:
    fields: dict of {name: field_type} mappings for each field in the proto. If
      this is an OrderedDict the order will be maintained, otherwise the
      fields will be sorted by name.
    full_name: optional str, the fully-qualified name of the proto type.
    pool: optional DescriptorPool instance.
  Returns:
    a class, the new protobuf class with a FileDescriptor.
  """
  pool_instance = pool or descriptor_pool.DescriptorPool()
  if full_name is not None:
    try:
      proto_cls = _GetMessageFromFactory(pool_instance, full_name)
      return proto_cls
    except KeyError:
      # The factory's DescriptorPool doesn't know about this class yet.
      pass

  # Get a list of (name, field_type) tuples from the fields dict. If fields was
  # an OrderedDict we keep the order, but otherwise we sort the fields to ensure
  # consistent ordering.
  field_items = fields.items()
  if not isinstance(fields, OrderedDict):
    field_items = sorted(field_items)

  # Use a consistent file name that is unlikely to conflict with any imported
  # proto files.
  fields_hash = hashlib.sha1()
  for f_name, f_type in field_items:
    fields_hash.update(f_name.encode('utf-8'))
    fields_hash.update(str(f_type).encode('utf-8'))
  proto_file_name = fields_hash.hexdigest() + '.proto'

  # If the proto is anonymous, use the same hash to name it.
  if full_name is None:
    full_name = ('net.proto2.python.public.proto_builder.AnonymousProto_' +
                 fields_hash.hexdigest())
    try:
      proto_cls = _GetMessageFromFactory(pool_instance, full_name)
      return proto_cls
    except KeyError:
      # The factory's DescriptorPool doesn't know about this class yet.
      pass

  # This is the first time we see this proto: add a new descriptor to the pool.
  pool_instance.Add(
      _MakeFileDescriptorProto(proto_file_name, full_name, field_items))
  return _GetMessageFromFactory(pool_instance, full_name)


def _MakeFileDescriptorProto(proto_file_name, full_name, field_items):
  """Populate FileDescriptorProto for MessageFactory's DescriptorPool."""
  package, name = full_name.rsplit('.', 1)
  file_proto = descriptor_pb2.FileDescriptorProto()
  file_proto.name = os.path.join(package.replace('.', '/'), proto_file_name)
  file_proto.package = package
  desc_proto = file_proto.message_type.add()
  desc_proto.name = name
  for f_number, (f_name, f_type) in enumerate(field_items, 1):
    field_proto = desc_proto.field.add()
    field_proto.name = f_name
    # If the number falls in the reserved range, reassign it to the correct
    # number after the range.
    if f_number >= descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER:
      f_number += (
          descriptor.FieldDescriptor.LAST_RESERVED_FIELD_NUMBER -
          descriptor.FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER + 1)
    field_proto.number = f_number
    field_proto.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL
    field_proto.type = f_type
  return file_proto
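`_MakeFileDescriptorProto` numbers fields sequentially but must hop over the protobuf reserved field-number range (19000-19999, which is what `FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER` and `LAST_RESERVED_FIELD_NUMBER` denote). The renumbering arithmetic can be checked standalone; the constants are inlined here rather than read from the descriptor module:

```python
# Per the protobuf spec, field numbers 19000-19999 are reserved for the
# implementation and may not be used by user-defined fields.
FIRST_RESERVED = 19000  # FieldDescriptor.FIRST_RESERVED_FIELD_NUMBER
LAST_RESERVED = 19999   # FieldDescriptor.LAST_RESERVED_FIELD_NUMBER


def assign_field_number(position: int) -> int:
    """Map a 1-based field position to a wire number, hopping the reserved range."""
    if position >= FIRST_RESERVED:
        # Shift past the reserved block: 19000 -> 20000, 19001 -> 20001, ...
        position += LAST_RESERVED - FIRST_RESERVED + 1
    return position


print(assign_field_number(1))      # 1
print(assign_field_number(18999))  # 18999
print(assign_field_number(19000))  # 20000
```

The `+ 1` makes the shift equal to the size of the reserved block (1000 numbers), so the mapping stays dense and collision-free on both sides of the gap.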
Some files were not shown because too many files have changed in this diff.