LQRController

A Linear Quadratic Regulator (LQR) for controlling a system modeled by state-space equations.

LQR is a form of optimal control that finds the best control input to apply to a system by minimizing a quadratic cost function (sketched after the list below). The cost function balances two competing goals:

  1. State Error: How far the system is from its desired target state (penalized by the Q matrix).

  2. Control Effort: How much energy or effort is used to control the system (penalized by the R matrix).
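For reference, the standard infinite-horizon LQR cost looks like the sketch below. The discrete-time summation form is an assumption here (the constructors below take a sample time dt); the exact cost this class minimizes is not spelled out on this page.

```latex
J = \sum_{k=0}^{\infty} \left( x_k^\top Q \, x_k + u_k^\top R \, u_k \right)
```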

The controller computes the optimal control input u using a simple state-feedback law: u = -Kx, where x is the system's state error and K is the optimal gain matrix.
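As a textbook sketch rather than a statement about this class's internals: in discrete time, the gain K is conventionally found by solving the discrete algebraic Riccati equation for P and then forming K from it.

```latex
P = A^\top P A - A^\top P B \left( R + B^\top P B \right)^{-1} B^\top P A + Q
K = \left( R + B^\top P B \right)^{-1} B^\top P A, \qquad u_k = -K x_k
```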

This controller is intended for use with holonomic robots only.

Thank you to Tyler Veness and WPILib!

Constructors

constructor(qX: Double, qY: Double, qHeading: Double, qVx: Double, qVy: Double, qOmega: Double, rAx: Double, rAy: Double, rAlpha: Double, dt: Double = 0.0303)

Constructor for the common holonomic robot control use case.

constructor(A: Matrix, B: Matrix, Q: Matrix, R: Matrix, dt: Double = 0.0303)

General-purpose constructor for any linear time-invariant (LTI) system.
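A minimal construction sketch using the holonomic constructor above. The weight values are illustrative placeholders rather than recommended tunings, and the import line is omitted because this page does not show the package name.

```kotlin
// Illustrative weights only: larger Q terms penalize state error more
// aggressively, while larger R terms penalize (and so limit) commanded acceleration.
val controller = LQRController(
    qX = 8.0, qY = 8.0, qHeading = 10.0,   // pose-error weights
    qVx = 1.0, qVy = 1.0, qOmega = 1.0,    // velocity-error weights
    rAx = 0.5, rAy = 0.5, rAlpha = 0.5     // acceleration-effort weights
    // dt defaults to 0.0303 s (~33 Hz control loop)
)
```

Raising a Q weight relative to the R weights makes the controller correct that state more aggressively, at the cost of larger commanded accelerations.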

Functions

open override fun compute(targetPose: Pose2dDual<Time>, actualPose: Pose2d, actualVelActual: PoseVelocity2d): PoseVelocity2dDual<Time>

Computes the velocity and acceleration command. The frame Target is the reference robot, and the frame Actual is the measured, physical robot.
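The parameter and return types here match Road Runner's holonomic follower interface, so the sketch below assumes the com.acmerobotics.roadrunner geometry classes; the pose and velocity values are placeholders for what localization would report, and the weights are the same illustrative ones as above.

```kotlin
import com.acmerobotics.roadrunner.Pose2d
import com.acmerobotics.roadrunner.Pose2dDual
import com.acmerobotics.roadrunner.PoseVelocity2d
import com.acmerobotics.roadrunner.Time
import com.acmerobotics.roadrunner.Vector2d

val controller = LQRController(8.0, 8.0, 10.0, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5)

// Hold a fixed target pose; a trajectory follower would instead sample the
// trajectory each loop to obtain a Pose2dDual<Time> carrying velocity and acceleration.
val targetPose = Pose2dDual.constant<Time>(Pose2d(24.0, 0.0, 0.0), 3)

// Placeholders: on a real robot these come from localization.
val actualPose = Pose2d(23.5, 0.4, 0.02)
val actualVel = PoseVelocity2d(Vector2d(6.0, -0.5), 0.0)

// Velocity-plus-acceleration command (feedforward plus LQR correction)
// to hand to the drive's velocity/feedforward layer.
val command = controller.compute(targetPose, actualPose, actualVel)
```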


fun update(error: Matrix): Matrix

Calculates the optimal control input (accelerations) to correct for the given state error.
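A sketch of how update might slot into a custom loop. Constructing the error Matrix is left abstract because the Matrix API is not shown on this page; the assumed state ordering [x, y, heading, vx, vy, omega] simply mirrors the holonomic constructor's Q weights, and the [ax, ay, alpha] output ordering is likewise an assumption.

```kotlin
// Sketch only: the caller packs the state error into a Matrix (construction
// not shown here). Assumed ordering: [x, y, heading, vx, vy, omega].
fun correctionFor(controller: LQRController, stateError: Matrix): Matrix {
    // u = -K * error, assumed to come back as accelerations [ax, ay, alpha]
    return controller.update(stateError)
}
```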