Commit a8c5bd73 authored by flfuchs

Initial commit
# Created by .ignore support plugin (hsz.mobi)
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
### Example user template template
### Example user template
# IntelliJ project files
.idea
*.iml
out
gen
### TortoiseGit template
# Project-level settings
/.tgitconfig
### JetBrains template
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
# Gradle
.idea/**/gradle.xml
.idea/**/libraries
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
cmake-build-*/
# Mongo Explorer plugin
.idea/**/mongoSettings.xml
# File-based project format
*.iws
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
# Editor-based Rest Client
.idea/httpRequests
# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
MIT License
Copyright (c) 2020 wsh122333
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Multi-type_vehicles_flow_statistics
Counts vehicles of multiple types using the YOLOv3 and SORT algorithms. Implemented in PyTorch.
Detects and tracks vehicles belonging to the classes \["bicycle","bus","car","motorbike","truck"].
## Reference
- yolov3-darknet https://github.com/pjreddie/darknet
- yolov3-pytorch https://github.com/eriklindernoren/PyTorch-YOLOv3
- sort https://github.com/abewley/sort
## Dependencies
- ubuntu/windows
- cuda>=10.0
- python>=3.6
- `pip3 install -r requirements.txt`
## Usage
1. Download the pre-trained yolov3 weight file [here](https://pjreddie.com/media/files/yolov3.weights) and put it into the `weights` directory;
2. Run `python3 app.py`;
3. Select a video, double-click the image to mark the corners of the counting area, then start;
4. After detection and tracking finish, the result video and file are saved in the `results` directory; each line of `results.txt` has the format \[videoName,id,objectName], one line per vehicle.
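
The per-class totals can also be recomputed offline from a saved result file. A minimal sketch, assuming one vehicle per line with the three fields separated by `;` (the separator the app uses when writing results); the path passed in is illustrative:

```python
from collections import Counter
from pathlib import Path

def count_vehicles(result_file):
    """Tally vehicles per class from a result file whose lines
    look like: videoName;id;objectName"""
    counts = Counter()
    for line in Path(result_file).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        _video, _track_id, obj_name = line.split(";")
        counts[obj_name] += 1
    return counts
```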
## Demo
![avatar](https://github.com/wsh122333/Multi-type_vehicles_flow_statistics/raw/master/asserts/demo1.gif)
![avatar](https://github.com/wsh122333/Multi-type_vehicles_flow_statistics/raw/master/asserts/demo2.gif)
![avatar](https://github.com/wsh122333/Multi-type_vehicles_flow_statistics/raw/master/asserts/demo3.gif)
import copy
import sys

import cv2
import numpy as np
import torch
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QMainWindow, QFileDialog

from config import names
from counter import CounterThread
from magic_constants import UPDATE_DISPLAY_AT_EVERY_FRAME
from gui import UiMainWindow
from models import Darknet
from utils.parse_config import parse_data_config
from utils.sort import KalmanBoxTracker
from utils.utils import load_classes


class App(QMainWindow, UiMainWindow):
    def __init__(self):
        super(App, self).__init__()
        self.setupUi(self)
        self.label_image_size = (
            self.label_image.geometry().width(),
            self.label_image.geometry().height(),
        )
        self._video_name = None
        self.exampleImage = None
        self.imgScale = None
        self.get_points_flag = 0
        self.countArea = []
        self.road_code = None
        self.time_code = None
        self.show_label = names

        # button functions
        self.pushButton_selectArea.clicked.connect(self.select_area)
        self.pushButton_openVideo.clicked.connect(self.open_video)
        self.pushButton_start.clicked.connect(self.start_count)
        self.pushButton_pause.clicked.connect(self.pause)
        self.label_image.mouseDoubleClickEvent = self.get_points
        self.pushButton_selectArea.setEnabled(False)
        self.pushButton_start.setEnabled(False)
        self.pushButton_pause.setEnabled(False)

        # some flags
        self.running_flag = 0
        self.pause_flag = 0
        self.counter_thread_start_flag = 0

        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        data_config = "config/coco.data"
        weights_path = "weights/yolov3.weights"
        model_def = "config/yolov3.cfg"
        data_config = parse_data_config(data_config)
        self.yolo_class_names = load_classes(data_config["names"])

        # initiate model
        print("Loading model ...")
        self.yolo_model = Darknet(model_def).to(self.device)
        if weights_path.endswith(".weights"):
            # load darknet weights
            self.yolo_model.load_darknet_weights(weights_path)
        else:
            # load checkpoint weights
            self.yolo_model.load_state_dict(torch.load(weights_path))

        # counter thread
        self.counterThread = CounterThread(self.yolo_model, self.yolo_class_names, self.device)
        self.counterThread.sin_counterResult.connect(self.show_image_label)
        self.counterThread.sin_done.connect(self.done)
        self.counterThread.sin_counter_results.connect(self.update_counter_results)

    def open_video(self):
        openfile_name = QFileDialog.getOpenFileName(self, "Open video", "", "Video files(*.avi , *.mp4)")
        self._video_name = openfile_name[0]
        vid = cv2.VideoCapture(openfile_name[0])
        while vid.isOpened():
            # grab a frame from the middle of the video as the example image
            vid.set(cv2.CAP_PROP_POS_FRAMES, int(vid.get(cv2.CAP_PROP_FRAME_COUNT) * 0.5))
            ret, frame = vid.read()
            if ret:
                self.exampleImage = frame
                self.show_image_label(frame)
                self.imgScale = np.array(frame.shape[:2]) / [
                    self.label_image_size[1],
                    self.label_image_size[0],
                ]
                vid.release()
                break
        self.pushButton_selectArea.setEnabled(True)
        self.pushButton_start.setText("Start")
        self.pushButton_start.setEnabled(False)
        self.pushButton_pause.setText("Pause")
        self.pushButton_pause.setEnabled(False)

        # clear counting results
        KalmanBoxTracker.count = 0
        self.label_sum.setText("0")
        self.label_sum.repaint()

    def get_points(self, event):
        if self.get_points_flag:
            x = event.x()
            y = event.y()
            self.countArea.append([int(x * self.imgScale[1]), int(y * self.imgScale[0])])
            exampleImageWithArea = copy.deepcopy(self.exampleImage)
            for point in self.countArea:
                exampleImageWithArea[point[1] - 10 : point[1] + 10, point[0] - 10 : point[0] + 10] = (0, 255, 255)
            cv2.fillConvexPoly(exampleImageWithArea, np.array(self.countArea), (0, 0, 255))
            self.show_image_label(exampleImageWithArea)
        print(self.countArea)

    def select_area(self):
        # changing the area requires an updated exampleImage
        if self.counter_thread_start_flag:
            self.videoCapture.set(
                cv2.CAP_PROP_POS_FRAMES,
                int(self.videoCapture.get(cv2.CAP_PROP_FRAME_COUNT) * 0.5),
            )
            ret, frame = self.videoCapture.read()
            if ret:
                self.exampleImage = frame
                self.show_image_label(frame)
        if not self.get_points_flag:
            self.pushButton_selectArea.setText("Submit Area")
            self.get_points_flag = 1
            self.countArea = []
            self.pushButton_openVideo.setEnabled(False)
            self.pushButton_start.setEnabled(False)
        else:
            self.pushButton_selectArea.setText("Select Area")
            self.get_points_flag = 0
            exampleImage = copy.deepcopy(self.exampleImage)
            # paint the area outline
            for i in range(len(self.countArea)):
                cv2.line(
                    exampleImage,
                    tuple(self.countArea[i]),
                    tuple(self.countArea[(i + 1) % len(self.countArea)]),
                    (0, 0, 255),
                    2,
                )
            self.show_image_label(exampleImage)
            # enable start button
            self.pushButton_openVideo.setEnabled(True)
            self.pushButton_start.setEnabled(True)

    def show_image_label(self, img_np):
        if not UPDATE_DISPLAY_AT_EVERY_FRAME and self.counter_thread_start_flag:
            return
        img_np = cv2.cvtColor(img_np, cv2.COLOR_BGR2RGB)
        img_np = cv2.resize(img_np, self.label_image_size)
        frame = QImage(
            img_np,
            self.label_image_size[0],
            self.label_image_size[1],
            QImage.Format_RGB888,
        )
        pix = QPixmap.fromImage(frame)
        self.label_image.setPixmap(pix)
        self.label_image.repaint()

    def start_count(self):
        if self.running_flag == 0:
            # clear count and display
            KalmanBoxTracker.count = 0
            for item in self.show_label:
                vars(self)[f"label_{item}"].setText("0")
            # clear result file
            with open(f"{self._video_name}.csv", "w"):
                pass
            # start
            self.running_flag = 1
            self.pause_flag = 0
            self.pushButton_start.setText("Stop")
            self.pushButton_openVideo.setEnabled(False)
            self.pushButton_selectArea.setEnabled(False)
            # emit new parameters to the counter thread
            self.counterThread.sin_runningFlag.emit(self.running_flag)
            self.counterThread.sin_countArea.emit(self.countArea)
            self.counterThread.sin_videoList.emit(self._video_name)
            # start counter thread
            self.counterThread.start()
            self.pushButton_pause.setEnabled(True)
        elif self.running_flag == 1:  # stop button pushed
            # stop system
            self.running_flag = 0
            self.counterThread.sin_runningFlag.emit(self.running_flag)
            self.pushButton_openVideo.setEnabled(True)
            self.pushButton_selectArea.setEnabled(True)
            self.pushButton_start.setText("Start")

    def done(self, sin):
        if sin == 1:
            self.pushButton_openVideo.setEnabled(True)
            self.pushButton_start.setEnabled(False)
            self.pushButton_start.setText("Start")

    def update_counter_results(self, counter_results):
        with open(self._video_name + ".csv", "a") as f:
            for i, result in enumerate(counter_results):
                label_var = vars(self)[f"label_{result[1]}"]
                label_var.setText(str(int(label_var.text()) + 1))
                label_var.repaint()
                label_sum_var = vars(self)["label_sum"]
                label_sum_var.setText(str(int(label_sum_var.text()) + 1))
                label_sum_var.repaint()
                f.write(";".join(map(str, result)))
                f.write("\n")

    def pause(self):
        if self.pause_flag == 0:
            self.pause_flag = 1
            self.pushButton_pause.setText("Continue")
            self.pushButton_start.setEnabled(False)
        else:
            self.pause_flag = 0
            self.pushButton_pause.setText("Pause")
            self.pushButton_start.setEnabled(True)
        self.counterThread.sin_pauseFlag.emit(self.pause_flag)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    myWin = App()
    myWin.show()
    sys.exit(app.exec_())
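
The click-to-frame coordinate mapping used by `get_points` can be illustrated in isolation. A minimal sketch, using the same per-axis scale that `open_video` computes (`frame_shape / label_size`); the sizes below are made-up examples, not values from the application:

```python
import numpy as np

def label_to_frame(x, y, frame_shape, label_size):
    """Map a click at (x, y) on the display label back to pixel
    coordinates in the original video frame.

    frame_shape: (height, width, channels) of the frame,
    label_size:  (width, height) of the display label.
    """
    # img_scale = (frame_h / label_h, frame_w / label_w), as in open_video
    img_scale = np.array(frame_shape[:2]) / [label_size[1], label_size[0]]
    return int(x * img_scale[1]), int(y * img_scale[0])
```

For a 1280x720 frame shown in a 640x360 label, both axes scale by 2, so a click at (100, 50) maps to frame pixel (200, 100).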
color_dict = {
    "bicycle": (179, 52, 255),
    "bus": (255, 191, 0),
    "car": (127, 255, 0),
    "motorbike": (0, 140, 255),
    "truck": (0, 215, 255),
}
names = list(color_dict.keys())
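
Note that the tuples in `color_dict` are in OpenCV's BGR channel order, not RGB. A hypothetical helper (purely illustrative, not part of the repository) that converts them to RGB hex strings makes the distinction concrete:

```python
def bgr_to_hex(bgr):
    """Convert an OpenCV-style (B, G, R) tuple to an RGB hex string."""
    b, g, r = bgr
    return "#{:02x}{:02x}{:02x}".format(r, g, b)
```

For example, the "bus" color (255, 191, 0) is deep-sky blue (#00bfff), not orange as a naive RGB reading would suggest.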
classes=80
train=data/train.txt
valid=data/valid.txt
names=config/coco.names
backup=backup/
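
The `.data` file above is a flat list of `key=value` pairs. A minimal parser sketch in the same spirit; the repository's actual `utils.parse_config.parse_data_config` may differ in detail:

```python
def parse_data_config(path):
    """Parse a darknet-style .data file into a dict of stripped
    key/value strings, skipping blank lines and # comments."""
    options = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, value = line.split("=", 1)
            options[key.strip()] = value.strip()
    return options
```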
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=2
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
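
Darknet `.cfg` files like the fragment above are a sequence of `[section]` headers, each followed by `key=value` pairs. A minimal reader sketch; the `Darknet` model class relies on the repository's own parser in `utils.parse_config`, which this merely approximates:

```python
def parse_model_config(path):
    """Split a darknet .cfg file into a list of dicts, one per
    [section] block, with the section name stored under "type"."""
    blocks = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if line.startswith("["):
                blocks.append({"type": line[1:-1].strip()})
            else:
                key, value = line.split("=", 1)
                blocks[-1][key.strip()] = value.strip()
    return blocks
```

Applied to a cfg beginning with the `[net]` block above, the first returned dict would carry `type="net"` along with `batch`, `width`, and the other hyperparameters as strings.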