
1. Visual Odometry: ICP

The front end is also called visual odometry (VO). It estimates a rough camera motion from adjacent images and provides a good initial value for the back end.

1.1. Image Feature Points

Representative points are called:

  • in classical SLAM: landmarks
  • in visual SLAM: image features

A feature point consists of two parts:

  • Keypoint: the position, orientation, and size of the feature in the image
  • Descriptor: usually a vector that describes the surrounding pixels

Common image feature points (a minimal ORB sketch follows this list):

  • SIFT (Scale-Invariant Feature Transform): computationally expensive
  • FAST: fast to compute
  • ORB (Oriented FAST and Rotated BRIEF): a compromise between the two
  • The keypoint of ORB is FAST: if a pixel differs significantly from the pixels in its neighborhood, it is more likely to be a corner
  • The descriptor of ORB is BRIEF: a binary descriptor based on the intensity relationships between pairs of pixels near the keypoint
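
As a rough illustration, here is a minimal sketch of extracting and matching ORB features with OpenCV (the image file names and parameter values are placeholders, not code from the book):

```python
import cv2

# Load two adjacent frames (placeholder file names)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # FAST keypoints + rotated BRIEF descriptors
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints and 256-bit binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("number of matches:", len(matches))
```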

1.2. Epipolar Geometry: 2D-2D

Read more »

1. Preface

The book is divided into two parts:

  1. Mathematical foundations: an overview of SLAM, matrices, Lie groups, and nonlinear optimization
  2. SLAM techniques: practicing visual odometry (feature-based and direct methods), back-end optimization, and loop closure detection

2. SLAM Overview

2.1. Modules of the SLAM Framework and Their Tasks

  • VO: visual odometry, closely related to computer vision research. From two images it determines how many degrees the camera rotated and how many centimeters it translated; the motion between frames is computed by VO and then accumulated. The problem is accumulated drift, which requires back-end optimization and loop closure detection
  • Back-end optimization: reduces noise and estimates the state of the whole system, mainly with filtering and nonlinear optimization algorithms; it optimizes the trajectory
  • Loop closure detection: addresses accumulated drift by giving the robot the ability to recognize scenes it has visited before
  • Mapping: metric maps are commonly used today, while topological maps remain a research topic
  • Sparse maps: represent only important objects, i.e. landmarks; a sparse map is sufficient for localization
  • Dense maps: model everything and are used for navigation

2.2. Programming Environment

Ubuntu 14.04

CMake

Read more »

1. Introduction

Q-Learning: a method based on Q values

  • Every action in a specific state has a Q value $Q(s,a)$
  • For example, if an agent in state $s_1$ has two optional actions $a_1$ and $a_2$, and $Q(s_1,a_1)>Q(s_1,a_2)$, then the agent will choose $a_1$ rather than $a_2$

Main idea:

  1. Start with a (bad) Q-table that will guide our actions
  2. Act based on the Q-table
  3. Compute the estimated Q-value before the action and the real (target) Q-value after the action
  4. Update the Q-table based on the error between the estimated and real Q-values, making the Q-table better for future actions (see the sketch after this list)
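
A minimal sketch of that update rule in Python (the learning rate `alpha`, discount `gamma`, and table size are assumed values, not taken from this post):

```python
import numpy as np

n_states, n_actions = 6, 2
alpha, gamma = 0.1, 0.9                    # assumed learning rate and discount factor
Q = np.zeros((n_states, n_actions))        # start with a "bad" all-zero Q-table

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward the observed target."""
    q_estimate = Q[s, a]                           # estimated Q before the action
    q_target = r + gamma * Q[s_next].max()         # real (target) Q after the action
    Q[s, a] += alpha * (q_target - q_estimate)     # correct by the error
```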

2. Simple Game

2.1. Q-table

rows (the index) correspond to states, columns correspond to actions

all Q values start at 0; a sketch of such a table follows
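
A minimal sketch of building such a table with pandas (the state count and action names are placeholders):

```python
import pandas as pd

actions = ["left", "right"]   # placeholder action names
n_states = 6                  # placeholder number of states

# Rows are states, columns are actions, every Q value starts at 0
q_table = pd.DataFrame(0.0, index=range(n_states), columns=actions)
print(q_table)
```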

2.2. Choose action

Read more »

1. Objectives

  • Problem description
  • Data analysis
  • Analysis of intermediate results
  • Discussion of results
  • Recognizing the shortcomings

2. Problem Description and Data Definition

2.1. Problem Description

Use the KNN algorithm to classify the iris dataset, use the model trained on the training set to predict the test set, and observe the prediction quality.

Since the dataset is labeled, this is a typical supervised classification problem; a minimal sketch is shown below.
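
A minimal sketch of KNN classification on iris with scikit-learn (the train/test split ratio and k=3 are assumptions, not values from this post):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)   # k = 3 nearest neighbors
knn.fit(X_train, y_train)                   # "training" just stores the labeled samples
print("test accuracy:", knn.score(X_test, y_test))
```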

2.2. Data Definition

Data source: UCI

The iris dataset contains 3 species of flowers, with 50 samples per species.

  • One of the classes is linearly separable from the other two
  • Another class is not linearly separable from the other two
Read more »

1. Objectives

  • Choose a dataset from an industrial domain
  • Analyze the data
  • Analyze the problem and the results

2. Problem Analysis

2.1. Problem

A complex modern semi-conductor manufacturing process is normally under consistent surveillance via the monitoring of signals/variables collected from sensors and or process measurement points. However, not all of these signals are equally valuable in a specific monitoring system. The measured signals contain a combination of useful information, irrelevant information as well as noise. It is often the case that useful information is buried in the latter two. Engineers typically have a much larger number of signals than are actually required. If we consider each type of signal as a feature, then feature selection may be applied to identify the most relevant signals. The Process Engineers may then use these signals to determine key factors contributing to yield excursions downstream in the process. This will enable an increase in process throughput, decreased time to learning and reduce the per unit production costs.

In short, in the semiconductor manufacturing industry we want to use the signals from the sensors along the production line to predict in advance whether the final product will pass or fail.

In the past, multivariate statistical analysis was typically used for this kind of prediction, but in today's actual production processes a large number of sensors collect far too much information, so the original methods no longer work. Moreover, the signals differ in importance and some are mixed with a lot of noise, so we need to select the relevant features and choose a suitable model for the analysis.
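
A minimal sketch of one possible first filtering step with scikit-learn (the file name, the mean imputation, and the variance threshold are assumptions for illustration, not choices made in this post):

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import VarianceThreshold

# 1567 time points x 590 sensor signals, whitespace-separated (placeholder file name)
X = pd.read_csv("secom.data", sep=r"\s+", header=None)
X = SimpleImputer(strategy="mean").fit_transform(X)   # fill missing sensor readings

selector = VarianceThreshold(threshold=0.0)           # drop constant, information-free signals
X_reduced = selector.fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```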

Data source: SECOM Data Set

2.2. Data Format Definition

The data come from a semiconductor fab and were collected by sensors on the production line. The data have been anonymized, so we cannot tell which kind of sensor, or which individual sensor, each attribute comes from. The data are recorded by time point: the 1567 records correspond to the readings of the production-line sensors at 1567 time points.

Read more »

1. What is Transfer Learning

Transfer learning is a machine learning technique where a model trained on one task is re-purposed on a second related task.

For example, if I already have a well-trained model for detecting dogs and cats, and now I want to train a model that can distinguish different breeds of dogs, I don't need to train a model from scratch. I can just take the pre-trained model and retrain the last few layers, as in the sketch below.
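
A minimal sketch of this idea with Keras (MobileNetV2, the 10-class head, and the hyper-parameters are assumptions for illustration):

```python
import tensorflow as tf

# Pre-trained feature extractor, reused as-is
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False              # freeze the pre-trained layers

# New head for the new task (e.g. 10 dog breeds)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)     # only the new head is trained on the new data
```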

2. How to Use Transfer Learning

Two common approaches:

  • Develop Model: if you have a large dataset for a similar problem and are willing to train the model yourself.
  • Pre-trained Model: if you don't have enough data to train your own model, you can download a pre-trained model released by a research institution.

3. When to Use Transfer Learning

(Figure: transfer-learning)

Read more »

1. Introduction

Spinning Up, released by OpenAI, is a good resource for learning reinforcement learning.

The goal of Spinning Up is to ensure that AI is developed safely and to help people learn Deep RL, which has a pretty high barrier to entry.

In a nutshell, RL is the study of agents and how they learn by trial and error. It formalizes the idea that rewarding or punishing an agent for its behavior makes it more likely to repeat or forego that behavior in the future.

Learning Types:

  • supervised: given x and y, find $f()$
  • unsupervised: given x, find clusters
  • reinforcement: given x and z, find a decision function $f()$ that generates y
  • difference from supervised learning: delayed reward, i.e. your action affects the later world; it is about time and sequences

Markov decision process (MDP) (a toy sketch follows this list):

  • states: things that describe the world
  • actions: things you can do
  • model: the rules, the physics of the world
  • reward: a scalar value assessing a state
  • policy: what we want to learn, $\pi^{\star}(state)\to action$, the action that maximizes the cumulative reward
  • when moving in a grid map, the agent should avoid negative rewards and collect positive rewards
  • you need to set the reward carefully
  • even in the same state, your action may differ depending on how many steps (how much time) remain; e.g. if you have only 3 steps left, you may act more riskily: $\pi^{\star}(state,time)\to action$
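
A toy sketch of these MDP ingredients in Python (all states, transitions, and reward values are made up for illustration):

```python
# states: things that describe the world
states = ["s0", "s1", "goal", "pit"]
# actions: things you can do
actions = ["left", "right"]
# model: the (deterministic) rules of this toy world
model = {("s0", "right"): "s1", ("s0", "left"): "s0",
         ("s1", "right"): "goal", ("s1", "left"): "pit"}
# reward: a scalar value attached to each state
reward = {"s0": 0.0, "s1": 0.0, "goal": +1.0, "pit": -1.0}
# policy: a mapping state -> action (hand-written here; normally it is learned)
policy = {"s0": "right", "s1": "right"}

s = "s0"
while s in policy:                 # follow the policy until a terminal state
    s = model[(s, policy[s])]
print("ended in", s, "with reward", reward[s])
```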

2. Key Concepts

2.1. What can RL do

Read more »

1. Introduction

This project was started to build a flight controller from scratch.

I want to learn from the process, so the flight controller should stay as general as possible. However, the choice of airframe model and of controller platform (STM32/Arduino/Linux) makes that generality harder to achieve.

Before this project I had no experience in flight-controller programming, so I will learn from other projects shared on GitHub.

2. Learning from Others

2.1. HackFlight

As an education-oriented project, Hackflight is a simple, platform-independent, header-only C++ firmware for multirotor flight controllers and simulators.

2.1.1. Units

First of all, HackFlight defines some standard units to keep the code simple; a toy sketch of the stick-demand convention follows the list.

  • Distance: $m$
  • Time: $s$
  • Euler angles: $radians$
  • Stick demand interval: $[-1,1]$
  • Motor demands: $[0,1]$
  • Quaternion interval: $[-1,1]$
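
As a rough illustration of the stick-demand convention (this is not Hackflight code; the 1000-2000 µs RC pulse range is an assumption):

```python
def pulse_to_demand(pulse_us: float) -> float:
    """Map a raw 1000-2000 µs RC pulse to a stick demand in [-1, 1]."""
    demand = (pulse_us - 1500.0) / 500.0
    return max(-1.0, min(1.0, demand))

print(pulse_to_demand(1500))  # 0.0  (stick centered)
print(pulse_to_demand(2000))  # 1.0  (full deflection)
```
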
Read more »

1. Introduction

In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables.

$$
\left\{\begin{array}{l}{y=\beta_{0}+\beta_{1} x+\varepsilon} \\ {E \varepsilon=0,\ D \varepsilon=\sigma^{2}}\end{array}\right.
$$

2. The Development of the Algorithm

2.1. Problem set

$$
\mathbf{X}=\left[\begin{array}{cccc}
{x_{11}} & {x_{12}} & {\dots} & {x_{1 m}} \\
{x_{21}} & {x_{22}} & {\dots} & {x_{2 m}} \\
{\vdots} & {\vdots} & {\ddots} & {\vdots} \\
{x_{n 1}} & {x_{n 2}} & {\dots} & {x_{n m}}
\end{array}\right]
$$

$$
Y=\left(y_{1} \quad y_{2}\right)=\left[\begin{array}{cc}
{y_{11}} & {y_{21}} \\
{y_{12}} & {y_{22}} \\
{\vdots} & {\vdots} \\
{y_{1 n}} & {y_{2 n}}
\end{array}\right]
$$

$$
B=\left(b_{1} \quad b_{2}\right)=\left[\begin{array}{cc}
{b_{11}} & {b_{21}} \\
{b_{12}} & {b_{22}} \\
{\vdots} & {\vdots} \\
{b_{1 m}} & {b_{2 m}}
\end{array}\right]
$$

$$
E=\left(e_{1} \quad e_{2}\right)=\left[\begin{array}{cc}
{e_{11}} & {e_{21}} \\
{e_{12}} & {e_{22}} \\
{\vdots} & {\vdots} \\
{e_{1 n}} & {e_{2 n}}
\end{array}\right]
$$

$$
\boldsymbol{Y}=\boldsymbol{X} \boldsymbol{B}+\boldsymbol{E}; \quad y_{1}=\boldsymbol{X} \boldsymbol{b}_{1}; \quad y_{2}=\boldsymbol{X} \boldsymbol{b}_{2}
$$
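
A minimal numerical sketch of estimating $B$ in $\boldsymbol{Y}=\boldsymbol{X}\boldsymbol{B}+\boldsymbol{E}$ by least squares with NumPy (the sizes $n=100$, $m=3$ and the random data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 3                                    # assumed sample size and number of predictors
X = rng.normal(size=(n, m))
B_true = rng.normal(size=(m, 2))                 # two response columns y1, y2
Y = X @ B_true + 0.1 * rng.normal(size=(n, 2))   # Y = X B + E

# Least-squares estimate of B; each response column is regressed on X
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B_hat)
```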

Read more »

1. Introduction

An upgraded version of the OMNIBUS F4, the OMNIBUS F4 Pro (some shops call it the OMNIBUS F4 PRO V2) adds SD card support, a 5 V 3 A BEC, an LC filter for the camera and VTX, and a built-in current sensor, for highly integrated frames.

(Figure: omnibusf4)

2. Specifications

  • Processor
    • STM32F405 ARM
  • Sensors
    • InvenSense MPU6000 IMU (accel, gyro)
    • BMP280 barometer
    • Voltage and current sensor
  • Interfaces
    • UARTs
    • PWM outputs
    • RC input: PWM/PPM, SBUS
    • I2C port for external compass
    • USB port
    • Built-in OSD
Read more »