Protobuf prototype

Konstantin Lampalzer 2022-03-18 18:11:16 +01:00
parent e277a83c5f
commit 9b567b8c6c
119 changed files with 8599 additions and 0 deletions

docs/.gitignore vendored

@@ -1,2 +0,0 @@
build
logs.db

@@ -1,20 +0,0 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

@@ -1,35 +0,0 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd

@@ -1,60 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
sys.setrecursionlimit(1500)

os.environ["EXTENSIVE_LOGGING"] = "False"

# -- Project information -----------------------------------------------------

project = 'CompLib'
copyright = '2022, Verein zur Förderung von Wissenschaft und Technik an Schulen (F-WuTS)'
author = 'robo4you'

# The full version, including alpha/beta/rc tags
release = '0.2.3'

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx_rtd_theme'
]

autodoc_mock_imports = ["smbus", "compLib.PCA9685", "RPi",
                        "pigpio", "flask", "apt", "influxdb_client"]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

@@ -1,19 +0,0 @@
.. image:: images/compair-logo-white.svg

Competition Robot Library
#############################

.. toctree::
   :maxdepth: 2
   :caption: Contents:

Contents
*********

.. toctree::
   :maxdepth: 5
   :glob:

   self
   other/usage
   lib/*

@@ -1,107 +0,0 @@
.. _lib_api:

Api
****

Seeding
========

.. autoclass:: compLib.Api.Seeding
    :members:

Double Elimination
===================

.. autoclass:: compLib.Api.DoubleElim
    :members:

Position
========

.. autoclass:: compLib.Api.Position
    :members:

Examples
========

Calling Seeding API
---------------------

.. code-block:: python

    from compLib.Api import Seeding

    zones, code = Seeding.get_delivery()

    if code == 403:
        print("I am not in the correct zone to make that request!")
    else:
        print(f"First we need to go to zone {zones[0]}")
        # put code here to follow line and drive to the zone
        print(f"Now we need to go to zone {zones[1]}")
        # put code here to follow line and drive to the next zone
        print(f"Now we need to go to zone {zones[2]}")
        # put code here to follow line and drive to the last zone
        print("We delivered all packages, hopefully we scored some points!")

.. code-block:: python

    from compLib.Api import Seeding

    package, code = Seeding.get_cargo("yellow")

    if code == 403:
        print("I am not in the correct zone to make that request!")
    elif code == 404:
        print("I am in the correct zone, but there is no yellow package here.")
    elif code == 413:
        print("I am in the correct zone, but I already have two packages loaded.")
    elif code == 200:
        print(f"The {package['color']} package has been picked up!")

Calling Double Elimination API
----------------------------------

.. code-block:: python

    from compLib.Api import DoubleElim

    position, status = DoubleElim.get_position()
    print(f"Position of my robot is: x={position.x}, y={position.y} and rotation is: {position.degrees}, the server responded with status code: {status}")

    goal, status = DoubleElim.get_goal()
    print(f"Goal is at: x={goal.x}, y={goal.y}, the server responded with status code: {status}")

.. code-block:: python

    from compLib.Api import DoubleElim
    import time

    # function which waits for the game to be started (you should include this in your double elimination program)
    def wait_for_start():
        _, status = DoubleElim.get_position()
        while status == 503:
            time.sleep(0.1)
            _, status = DoubleElim.get_position()

    wait_for_start()
    print("Game has started, let's score some points!!")

    position, status = DoubleElim.get_position()
    print(f"Position of my robot is: x={position.x}, y={position.y} and rotation is: {position.degrees}, the server responded with status code: {status}")

    opponent_position, status = DoubleElim.get_opponent()
    print(f"Position of the opponent's robot is: x={opponent_position.x}, y={opponent_position.y} and rotation is: {opponent_position.degrees}, the server responded with status code: {status}")

    goal, status = DoubleElim.get_goal()
    print(f"Goal is at: x={goal.x}, y={goal.y}, the server responded with status code: {status}")

    items, status = DoubleElim.get_items()
    print(f"There are currently {len(items)} items on the gameboard: {items}, the server responded with status code: {status}")

    score, status = DoubleElim.get_score()
    print(f"The current score of the game is {score}, the server responded with status code: {status}")

    meteoroids, status = DoubleElim.get_meteoroids()
    print(f"The current meteoroids in the game are {meteoroids}, the server responded with status code: {status}")

In this second example we wait until the game is started by the judges and then make all possible requests once. You should use the wait_for_start function in your double elimination program. If your robot starts too soon, your run will not count!
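
Putting it together, a minimal double elimination program might wait for the start signal and then begin driving. This is only a sketch: the driving part is a placeholder, and the motor wiring (motors 1 and 4 for driving, motor 1 mounted reversed) is an assumption taken from the other examples in this documentation.

.. code-block:: python

    import time
    from compLib.Api import DoubleElim
    from compLib.Motor import Motor

    def wait_for_start():
        # the server answers with status 503 until the judges start the game
        _, status = DoubleElim.get_position()
        while status == 503:
            time.sleep(0.1)
            _, status = DoubleElim.get_position()

    def main():
        wait_for_start()

        goal, status = DoubleElim.get_goal()
        if status == 200:
            print(f"Heading for the goal at x={goal.x}, y={goal.y}")

        # placeholder: drive forward for one second, then stop
        # (motor 1 is mounted reversed, as in the other examples)
        Motor.power(1, -50)
        Motor.power(4, 50)
        time.sleep(1)
        Motor.power(1, 0)
        Motor.power(4, 0)

    if __name__ == '__main__':
        main()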

@@ -1,83 +0,0 @@
.. _lib_aruco:

Aruco
*******

Examples
=========

Recognizing ArUco tags
-------------------------

.. code-block:: python

    import time
    import cv2
    from cv2 import aruco

    from compLib import Vision

    ARUCO_DICT = cv2.aruco.Dictionary_get(aruco.DICT_6X6_250)
    ARUCO_PARAMETERS = aruco.DetectorParameters_create()

    def getTagCenterFromFrame(id, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, ARUCO_DICT, parameters=ARUCO_PARAMETERS)
        frame = aruco.drawDetectedMarkers(frame.copy(), corners, ids)

        if ids is None:
            return frame, None, None

        for tag_id, corner in zip(ids, corners):
            if tag_id[0] == id:
                # average the four corner points to get the tag center
                x, y = 0, 0
                for i in range(4):
                    x += corner[0][i][0] * 0.25
                    y += corner[0][i][1] * 0.25
                return frame, x, y

        return frame, None, None

    # Get the center of the aruco tag with the specified id
    # in pixel coordinates (0-640, 0-480)
    def getTagPosition(id):
        frame = Vision.Streaming.get_frame()
        frame, x, y = getTagCenterFromFrame(id, frame)
        Vision.Streaming.publish_frame(frame)
        return x, y

    # Get the normalized center coordinates of the aruco tag
    # with the specified id
    # left is -1, right +1
    # bottom is -1, top +1
    def getNormalizedTagPosition(id):
        frame = Vision.Streaming.get_frame()
        frame, x, y = getTagCenterFromFrame(id, frame)
        Vision.Streaming.publish_frame(frame)

        if x is None or y is None:
            return None, None

        height, width = frame.shape[:2]
        x = x / width * 2.0 - 1.0
        y = -(y / height * 2.0 - 1.0)
        return x, y

    if __name__ == '__main__':
        desiredID = 11
        while True:
            x, y = getNormalizedTagPosition(desiredID)
            if x is not None:
                print("X Coordinate: ", x)
            else:
                print("Tag not found")

This example shows how to recognize ArUco tags by their id and determine their position.
You specify the ID of the tag you want to use and, if it is found, the coordinates of its center are returned.
With the normalized function this is very easy: the x-coordinate is -1 on the left, 1 on the right and 0 in the center of the screen, and the same applies to y.
This way it is quite simple to act on the position of the tag, as the sketch below shows.
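
Acting on the tag position
---------------------------

A hypothetical follow-the-tag loop could turn the robot towards the tag with a simple proportional controller. This is only a sketch, not part of the library: it continues the example above and reuses getNormalizedTagPosition, and it assumes the drive wiring used in the other examples (motor 4 left, motor 1 right and mounted reversed).

.. code-block:: python

    from compLib.Motor import Motor

    TURN_SPEED = 30   # maximum turning power in percent (arbitrary choice)
    DESIRED_ID = 11

    while True:
        x, y = getNormalizedTagPosition(DESIRED_ID)
        if x is None:
            # tag not visible: stop and keep looking
            Motor.power(1, 0)
            Motor.power(4, 0)
            continue

        # x is -1 at the left edge and +1 at the right edge, so feeding it
        # to both motors with the same sign turns the robot on the spot
        # towards the tag (motor 1 drives the right wheel reversed)
        Motor.power(1, TURN_SPEED * x)
        Motor.power(4, TURN_SPEED * x)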

@@ -1,23 +0,0 @@
.. _lib_display:

Display
*******

Class Documentation
====================

.. autoclass:: compLib.Display.Display
    :members:

Examples
=========

Write a line to the display
---------------------------

.. code-block:: python

    import time
    from compLib.Display import Display

    Display.write(1, "Hello World!")
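
Show a sensor value
--------------------

Building on the same Display.write(line, text) call, a second sketch could show a live sensor reading. The second display line and the choice of sensor 3 as the middle infrared sensor are assumptions taken from the line-follower example.

.. code-block:: python

    import time
    from compLib.Display import Display
    from compLib.IRSensor import IRSensor

    while True:
        # assumes the display offers at least two writable lines
        Display.write(1, "IR middle:")
        Display.write(2, str(IRSensor.read(3)))
        time.sleep(0.5)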

@@ -1,10 +0,0 @@
.. _lib_encoder:

Encoder
*******

Class Documentation
====================

.. autoclass:: compLib.Encoder.Encoder
    :members:
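
Examples
=========

Reading the encoders
---------------------

A minimal sketch of polling both drive encoders, using the Encoder.read and Encoder.clear_all calls that also appear in the quality-control and odometry examples; the port mapping (4 = left, 1 = right) is an assumption taken from there.

.. code-block:: python

    import time
    from compLib.Encoder import Encoder

    # start counting from zero
    Encoder.clear_all()

    while True:
        print(f"L:{Encoder.read(4)} R:{Encoder.read(1)}")
        time.sleep(0.1)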

@@ -1,20 +0,0 @@
.. _lib_irsensor:

Infrared Sensor
****************

.. autoclass:: compLib.IRSensor.IRSensor
    :members:

Examples
=========

Testing analog sensors
-------------------------

.. code-block:: python

    from compLib.IRSensor import IRSensor

    while True:
        print("left: {} middle: {} right: {}".format(IRSensor.read(1), IRSensor.read(3), IRSensor.read(5)))

@@ -1,79 +0,0 @@
.. _lib_Linefollower:

Linefollower Examples
*********************

Simple Linefollower
-------------------------

.. code-block:: python

    from compLib.Motor import Motor
    from compLib.Display import Display
    from compLib.IRSensor import IRSensor
    from compLib.Encoder import Encoder
    import time

    # enable all five infrared sensors
    IRSensor.set(1, True)
    IRSensor.set(2, True)
    IRSensor.set(3, True)
    IRSensor.set(4, True)
    IRSensor.set(5, True)

    DRIVE_SPEED = 75
    COLOR_BREAK = 900   # readings above this threshold count as black
    KP = 10.0
    KD = 0.0

    def drive(leftSpeed, rightSpeed):
        # motor 1 (right) is mounted reversed, so its power is inverted
        # and scaled to compensate for differences between the motors
        rightSpeed *= -0.906
        Motor.power(1, min(max(-100, rightSpeed), 100))
        Motor.power(4, min(max(-100, leftSpeed), 100))

    def follow(sleepTime=0.1):
        lastError = 0
        sensorsBlack = 0

        # follow the line until at least three sensors see black (a crossing)
        while sensorsBlack < 3:
            sensorsBlack = 0
            for i in range(1, 6):
                if IRSensor.read(i) > COLOR_BREAK:
                    sensorsBlack += 1

            # keep the previous error if no sensor sees the line
            error = lastError
            if IRSensor.read(3) > COLOR_BREAK:
                error = 0
            elif IRSensor.read(1) > COLOR_BREAK:
                error = -1.5
            elif IRSensor.read(5) > COLOR_BREAK:
                error = 1.5
            elif IRSensor.read(2) > COLOR_BREAK:
                error = -1
            elif IRSensor.read(4) > COLOR_BREAK:
                error = 1
            elif error == 1.5:
                # line lost to the right, steer back hard
                error = 3
            elif error == -1.5:
                # line lost to the left, steer back hard
                error = -3

            # PD controller: compute the adjustment before updating lastError,
            # otherwise the derivative term is always zero
            adjustment = KP * error + KD * (error - lastError)
            lastError = error

            leftSpeed = DRIVE_SPEED + adjustment
            rightSpeed = DRIVE_SPEED - adjustment
            print(f"{leftSpeed} {rightSpeed} {adjustment} {error}")
            drive(leftSpeed, rightSpeed)

        drive(0, 0)
        time.sleep(sleepTime)

    def main():
        # drive across four crossings, pause a little longer at the last one
        follow()
        follow()
        follow()
        follow()
        follow(0.2)

    main()

@@ -1,22 +0,0 @@
.. _lib_logging:

Logging
*******

Class Documentation
====================

.. autoclass:: compLib.LogstashLogging.Logging
    :members:

Examples
=========

Turn up the logging
--------------------

.. code-block:: python

    from compLib.LogstashLogging import Logging

    Logging.set_debug()

@@ -1,23 +0,0 @@
.. _lib_motor:

Motor
******

Class Documentation
====================

.. autoclass:: compLib.Motor.Motor
    :members:

Examples
=========

Driving straight (maybe)
-------------------------

.. code-block:: python

    from compLib.Motor import Motor

    Motor.power(1, -50)
    Motor.power(4, 50)
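
Stopping with active braking
-----------------------------

A short sketch of driving forward briefly and then stopping; it reuses the Motor.active_break call from the odometry example and assumes the same wiring (motor 1 mounted reversed).

.. code-block:: python

    import time
    from compLib.Motor import Motor

    # drive forward for one second
    Motor.power(1, -50)
    Motor.power(4, 50)
    time.sleep(1)

    # actively brake both drive motors, as in the odometry example
    Motor.active_break(1)
    Motor.active_break(4)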

@@ -1,50 +0,0 @@
.. _lib_odom:

Odometry
********

Class Documentation
====================

.. autoclass:: compLib.Odom.Odometry
    :members:

.. autoclass:: compLib.Odom.Odom
    :members:

Examples
=========

Getting actual distance driven
------------------------------

.. code-block:: python

    import time
    import math

    from compLib.Motor import Motor
    from compLib.Encoder import Encoder
    from compLib.Odom import Odom, Odometry

    # distance in meters
    # speed in % of max speed
    def drive_example(distance, speed):
        Odom.update()
        odom = Odom.get_odom()
        while abs(odom.get_x()) < distance:
            Odom.update()
            odom = Odom.get_odom()

            Motor.power(4, speed)
            Motor.power(1, -speed)

            print(f" Forward: {odom.get_x()} m")
            print(f" Right: {odom.get_y()} m")
            print(f" Turned: {math.degrees(odom.get_orientation())} degrees")

        Motor.active_break(1)
        Motor.active_break(4)
        time.sleep(0.1)

        Encoder.clear_all()
        Odom.clear()
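
The function above is only defined, never called. A hypothetical invocation driving half a meter at half power could look like this:

.. code-block:: python

    if __name__ == '__main__':
        # drive 0.5 m forward at 50 % power, then brake and reset the odometry
        drive_example(0.5, 50)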

@@ -1,71 +0,0 @@
.. _lib_qc:

Quality Control
###############

Infrared Test
-------------------------

.. code-block:: python

    from compLib.IRSensor import IRSensor
    import time

    IRSensor.set(1, True)
    IRSensor.set(2, True)
    IRSensor.set(3, True)
    IRSensor.set(4, True)
    IRSensor.set(5, True)

    while True:
        t = time.time()
        for i in range(1, 6):
            print(f"{i}: {IRSensor.read(i)}")
        print("")
        time.sleep(0.2)

Motor Test
-------------------------

.. code-block:: python

    from compLib.Motor import Motor
    from compLib.Encoder import Encoder
    import time

    Motor.power(1, -50)
    Motor.power(4, 50)

    while True:
        print(f"L:{Encoder.read(4)} R:{Encoder.read(1)}")
        time.sleep(0.1)

Servo Test
-------------------------

.. code-block:: python

    from compLib.Servo import Servo
    import time

    for i in range(1, 8 + 1):
        Servo.set_position(i, 45)
        print(f"{i}")
        time.sleep(1)

    Servo.setup_position()
    time.sleep(10)

Vision Test
-------------------------

.. code-block:: python

    import cv2
    from compLib import Vision
    from compLib.Servo import Servo

    while True:
        frame = Vision.Streaming.get_frame()
        Vision.Streaming.publish_frame(frame)
        Servo.set_position(3, -45)

@@ -1,11 +0,0 @@
.. _lib_robot:

Robot
******

Class Documentation
====================

.. autoclass:: compLib.Robot.Robot
    :members:
    :private-members:

@@ -1,7 +0,0 @@
.. _lib_servo:

Servo
******

.. autoclass:: compLib.Servo.Servo
    :members:
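
Examples
=========

Sweeping a servo
-----------------

A small sketch using the Servo.set_position and Servo.setup_position calls from the quality-control tests; the angle values of 45 and -45 degrees are the ones used there, and the usable range is an assumption.

.. code-block:: python

    import time
    from compLib.Servo import Servo

    # sweep servo 1 back and forth three times
    for _ in range(3):
        Servo.set_position(1, 45)
        time.sleep(1)
        Servo.set_position(1, -45)
        time.sleep(1)

    # return all servos to their setup position
    Servo.setup_position()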

@@ -1,104 +0,0 @@
.. _lib_vision:

Vision
*******

This module provides an interface for grabbing an rtmp stream and using the images to do some processing in opencv.

How do I use this module?

1. Get frames from the raspberry pi camera
2. -- here comes your own processing --
3. Publish the processed frames on an http server
4. You can view the http stream of your processed images in a web browser

Opencv Stream
==============

Because the rtmp stream needs to buffer some frames and wait for P-frames, importing this module might take up
to 5 seconds.

.. autoclass:: compLib.Vision.__Streaming
    :members:

Examples
=========

Using the Vision Module
-------------------------

.. code-block:: python

    import cv2
    from compLib import Vision

    while True:
        # get newest opencv frame from camera
        frame = Vision.Streaming.get_frame()

        # do some processing with the frame.....

        # publish frame to streaming server
        Vision.Streaming.publish_frame(frame)

Connect the raspberry pi to your network and view the stream at "http://your_raspi_ip:9898/". This should display
your raspberry pi camera. Note: the stream will lag a little bit, BUT the processing of the image is done in
realtime.

The output on the website should show whatever your raspberry pi cam records:

.. image:: images/opencv_http_stream.png
    :width: 680
    :alt: Processed frames from opencv

Chessboard Detection
------------------------------------------

In this example we process the captured stream of images and want to detect chessboards. Run this example,
point your raspberry pi camera at a chessboard and it should be detected.

For testing you can point it at this image:

.. image:: images/chessboard.jpg
    :width: 680
    :alt: Chessboard for opencv processing

.. code-block:: python

    import cv2
    from compLib import Vision

    while True:
        # get newest opencv frame from camera
        frame = Vision.Streaming.get_frame()

        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

        # convert image to grayscale image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # find the chessboard corners
        ret, corners = cv2.findChessboardCorners(gray, (6, 9), None)

        # draw detected chessboard position onto the image
        cv2.drawChessboardCorners(frame, (6, 9), corners, ret)

        # publish frame to streaming server
        Vision.Streaming.publish_frame(frame)

Connect the raspberry pi to your network and view the stream at "http://your_raspi_ip:9898/".

The output image should look like this:

.. image:: images/chessboard_detected.jpg
    :width: 680
    :alt: Processed frames from opencv

Here is a screenshot of the stream website while viewing the chessboard in this documentation:

.. image:: images/opencv_processed.png
    :width: 680
    :alt: Processed frames from opencv

@@ -1,29 +0,0 @@
.. _other_usage:

Usage
######

.. code-block:: python

    import time
    from compLib.Motor import *

    def forward():
        Motor.power(1, -30)
        Motor.power(2, 30)

    def backward():
        Motor.power(1, 30)
        Motor.power(2, -30)

    def main():
        print("hello, I am a robot, beep boop")
        forward()
        time.sleep(1)
        backward()
        time.sleep(1)

    if __name__ == '__main__':
        main()