Interview questions & answers
Q1. What is a state space representation and why is it used?
A state space representation describes a dynamic system using the first-order vector differential equations ẋ = Ax + Bu and y = Cx + Du, where x is the state vector, u is the input, and y is the output. A DC motor's dynamics can be written in state space with state variables [armature current, angular velocity], providing a complete internal description that reveals hidden modes — modes cancelled out of the transfer function and therefore invisible to transfer function analysis. State space is essential for MIMO systems, time-varying systems, and modern optimal and robust control design, all of which classical transfer function methods handle poorly or not at all.
Follow-up: What is the relationship between the state space matrices and the transfer function?
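The follow-up has a concrete answer: the transfer function is G(s) = C(sI − A)⁻¹B + D. A minimal sketch using NumPy/SciPy, with a hypothetical two-state system chosen purely for illustration:

```python
import numpy as np
from scipy import signal

# Hypothetical two-state system in controllable canonical form
A = np.array([[0.0, 1.0], [-5.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[3.0, 1.0]])
D = np.array([[0.0]])

# G(s) = C (sI - A)^-1 B + D, computed by scipy.signal.ss2tf
num, den = signal.ss2tf(A, B, C, D)
print("numerator coefficients:", num)    # coefficients of s + 3
print("denominator coefficients:", den)  # coefficients of s^2 + 4s + 5
```

The denominator coefficients are exactly the characteristic polynomial of A, which is why the eigenvalues of A and the transfer function poles coincide for a minimal realization.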
Q2. What are state variables and how do you choose them?
State variables are the minimum set of variables that, together with all future inputs, completely determine the future behavior of the system — they represent the system's memory. For an RLC circuit, the natural state variables are the capacitor voltage and inductor current because these are energy-storing elements whose values cannot change instantaneously. While the choice of state variables is not unique, choosing the physical energy variables leads to a well-conditioned state matrix and direct physical interpretation of the system equations.
Follow-up: Is the choice of state variables unique, and does it matter for system analysis?
Q3. What are the eigenvalues of the system matrix A and what do they tell you?
The eigenvalues of A are the roots of det(sI - A) = 0, which are identical to the poles of the transfer function for a fully controllable and observable system, and they determine the natural response modes of the system. An A matrix with eigenvalues at -3 and -5 means the natural response decays as a combination of e^(-3t) and e^(-5t), with time constants of 333 ms and 200 ms respectively. Eigenvalues with positive real parts indicate unstable modes, and complex eigenvalues indicate oscillatory modes — the same stability interpretation as s-plane poles.
Follow-up: What happens if the eigenvalue of A is complex with a positive real part?
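A quick numerical check of the eigenvalue-to-time-constant relationship, using a hypothetical A matrix constructed so its characteristic polynomial is (s+3)(s+5):

```python
import numpy as np

# Hypothetical companion-form A with characteristic polynomial s^2 + 8s + 15
A = np.array([[0.0, 1.0], [-15.0, -8.0]])

eigvals = np.linalg.eigvals(A)               # open-loop natural modes
time_constants = 1.0 / np.abs(eigvals.real)  # tau = 1/|Re(lambda)| for stable real modes

print("modes:", np.sort(eigvals.real))               # near -5 and -3
print("time constants (s):", np.sort(time_constants))  # near 0.2 s and 0.333 s
```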
Q4. What is controllability and how do you test it?
A system is controllable if there exists a finite-time input that can drive the state from any initial condition to any desired final state; it is tested by forming the controllability matrix Wc = [B, AB, A²B, ...A^(n-1)B] and checking that it has full rank n. For a two-state DC motor model, the 2×2 controllability matrix must have rank 2 — if it is rank 1, one of the motor's modes cannot be influenced by the armature voltage input. Loss of controllability usually means a physical state variable is not coupled to the input, often due to a structural or modeling error.
Follow-up: What is the physical interpretation of an uncontrollable mode?
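The rank test can be sketched in a few lines of NumPy; the A and B values below are hypothetical, chosen only to illustrate the construction of Wc:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix Wc = [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Hypothetical two-state model: x = [current, speed], single voltage input
A = np.array([[-2.0, -1.0], [4.0, -3.0]])
B = np.array([[1.0], [0.0]])

Wc = ctrb(A, B)
rank = np.linalg.matrix_rank(Wc)
print("rank of Wc:", rank)  # full rank (2) means controllable
```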
Q5. What is observability and how do you test it?
A system is observable if the initial state can be determined from the output measurements over a finite time interval; it is tested by forming the observability matrix Wo = [C; CA; CA²; ... CA^(n-1)] and checking that it has full rank n. In a robot joint with a position encoder measuring only angular position, if the velocity state is not observable from position measurements alone, a state observer cannot recover the velocity for feedback. An unobservable mode is one whose dynamics do not appear in the output, often because the output equation C misses that state.
Follow-up: What is the dual relationship between controllability and observability?
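The observability test can be illustrated on a hypothetical position/velocity joint model, showing how the choice of C decides which states are recoverable:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix Wo = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Hypothetical joint model: x = [position, velocity]
A = np.array([[0.0, 1.0], [0.0, -10.0]])
C_pos = np.array([[1.0, 0.0]])  # encoder measures position
C_vel = np.array([[0.0, 1.0]])  # tachometer measures velocity only

print(np.linalg.matrix_rank(obsv(A, C_pos)))  # rank 2: velocity recoverable from position
print(np.linalg.matrix_rank(obsv(A, C_vel)))  # rank 1: position unobservable from velocity
```

Measuring only velocity loses the position state because position never feeds back into the measured dynamics, matching the "C misses that state" intuition above.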
Q6. What is pole placement and how is it done in state space?
Pole placement is a method of choosing the full-state feedback gain matrix K such that the eigenvalues of (A - BK) are placed at desired closed-loop locations that satisfy performance specifications like damping ratio and settling time. For a magnetic levitation system with A having an unstable eigenvalue at +5, the feedback gain K is computed using the Ackermann formula or place() in MATLAB to shift that eigenvalue to s = -5, stabilizing the system. The Ackermann formula requires the system to be fully controllable — if any mode is uncontrollable, that eigenvalue cannot be moved by state feedback.
Follow-up: What is the Ackermann formula and when does it fail?
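A sketch of pole placement with SciPy's place_poles, using a hypothetical maglev-like A with an unstable open-loop eigenvalue at +5 (the numbers are illustrative, not from a real plant):

```python
import numpy as np
from scipy import signal

# Hypothetical maglev-like model: eigenvalues of A are +5 and -5
A = np.array([[0.0, 1.0], [25.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Choose gain K so that eig(A - BK) lands at -5 and -6
result = signal.place_poles(A, B, [-5.0, -6.0])
K = result.gain_matrix

closed_loop = np.linalg.eigvals(A - B @ K)
print("closed-loop eigenvalues:", np.sort(closed_loop.real))
```

place_poles raises an error if (A, B) is not controllable, which is the numerical counterpart of the "uncontrollable modes cannot be moved" statement above.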
Q7. What is a state observer (Luenberger observer) and why is it needed?
A Luenberger observer is a dynamic system that estimates the full state vector x̂ from the available output y, using the observer equation x̂̇ = Ax̂ + Bu + L(y - Cx̂), where L is the observer gain matrix chosen so that estimation error decays quickly. In a DC motor controller where only position is measured by an encoder, a Luenberger observer estimates the velocity from the position measurements so that full-state velocity feedback can be implemented without a tachometer. Observer poles should be placed 3–5 times faster than the controller poles to ensure the state estimates converge before they are used for control.
Follow-up: How do you choose the observer gain matrix L?
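One common way to choose L is by duality: the eigenvalues of (A − LC) equal those of (Aᵀ − CᵀLᵀ), so observer design is pole placement on the transposed pair. A sketch under that assumption, with illustrative numbers:

```python
import numpy as np
from scipy import signal

# Hypothetical motor model: x = [position, velocity], only position measured
A = np.array([[0.0, 1.0], [0.0, -10.0]])
C = np.array([[1.0, 0.0]])

# Place observer poles (here at -40 and -50) via the dual pair (A^T, C^T)
L = signal.place_poles(A.T, C.T, [-40.0, -50.0]).gain_matrix.T

observer_poles = np.linalg.eigvals(A - L @ C)
print("observer poles:", np.sort(observer_poles.real))
```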
Q8. What is the separation principle in control systems?
The separation principle states that in a linear system, the design of the state feedback controller and the state observer can be done independently — the combined system's closed-loop poles are the union of the controller poles and the observer poles. For a servo motor controller with full state feedback, you can design the K matrix to place controller poles at -5 ± j5 and independently design the L matrix to place observer poles at -20 ± j20, and the combined system will have exactly all four of these poles. This principle only holds for linear systems; for nonlinear systems, combined controller-observer design is generally required.
Follow-up: Does the separation principle hold for nonlinear systems?
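The separation principle can be verified numerically: in (state, estimation-error) coordinates the combined closed-loop matrix is block triangular, so its spectrum is the union of the controller and observer poles. A sketch with a hypothetical double-integrator servo model:

```python
import numpy as np
from scipy import signal

# Hypothetical servo plant (double integrator), position output
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = signal.place_poles(A, B, [-5 + 5j, -5 - 5j]).gain_matrix       # controller
L = signal.place_poles(A.T, C.T, [-20 + 20j, -20 - 20j]).gain_matrix.T  # observer

# Closed loop in (x, x - xhat) coordinates: block upper triangular,
# so eigenvalues = eig(A - BK) union eig(A - LC)
A_cl = np.block([[A - B @ K, B @ K],
                 [np.zeros_like(A), A - L @ C]])
poles = np.linalg.eigvals(A_cl)
print("combined poles:", np.sort_complex(poles))
```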
Q9. How do you convert a transfer function to state space form?
The controllable canonical form conversion takes a transfer function with denominator a₀sⁿ + a₁s^(n-1) + ... + aₙ and writes the A matrix with -aₙ/a₀ through -a₁/a₀ in the last row and a companion matrix structure, with B having 1 in the last position and C extracting the output. For H(s) = (s + 3)/(s² + 4s + 5), the controllable canonical form gives A = [[0,1],[-5,-4]], B = [[0],[1]], C = [3,1]. Different canonical forms (observable, modal, Jordan) exist and are chosen based on which property is most convenient for the design task.
Follow-up: What is the difference between controllable canonical form and observable canonical form?
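SciPy performs this conversion directly; note that scipy.signal.tf2ss uses a companion form with the coefficients in the top row, a different (but equivalent up to similarity transformation) state ordering than the bottom-row form described above. A round trip confirms the realization:

```python
import numpy as np
from scipy import signal

# H(s) = (s + 3) / (s^2 + 4s + 5), the example from the answer
num = [1.0, 3.0]
den = [1.0, 4.0, 5.0]

A, B, C, D = signal.tf2ss(num, den)  # companion ("controller canonical") form

# Convert back to verify the same transfer function is recovered
num2, den2 = signal.ss2tf(A, B, C, D)
print("recovered numerator:", num2)
print("recovered denominator:", den2)
```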
Q10. What is the matrix exponential e^(At) and what role does it play in state space?
The matrix exponential e^(At) is the state transition matrix Φ(t) that propagates the state from any initial condition: x(t) = e^(At)·x(0) + ∫₀ᵗ e^(A(t−τ))Bu(τ)dτ, the complete solution to the state equation. For a first-order system with A = [-2], e^(At) = e^(-2t), which is simply the familiar scalar exponential decay. Computing the matrix exponential for higher-order systems is typically done using the eigenvalue decomposition A = PΛP⁻¹ (when A is diagonalizable), giving e^(At) = P·diag(e^(λ₁t), e^(λ₂t), ...)·P⁻¹.
Follow-up: How does the eigenvalue decomposition simplify computing the matrix exponential?
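The eigendecomposition route can be checked against scipy.linalg.expm; the A below is a hypothetical diagonalizable matrix with eigenvalues −3 and −5:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical diagonalizable A (eigenvalues -3 and -5)
A = np.array([[0.0, 1.0], [-15.0, -8.0]])
t = 0.5

# A = P Lambda P^-1  =>  e^(At) = P diag(e^(lambda_i t)) P^-1
lam, P = np.linalg.eig(A)
Phi_eig = np.real(P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P))

# Reference computation via scipy's matrix exponential
Phi_ref = expm(A * t)
print("max difference:", np.max(np.abs(Phi_eig - Phi_ref)))
```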
Q11. What is a minimal state space realization?
A minimal realization is a state space representation with the smallest possible number of state variables — equal to the degree of the denominator after cancelling common factors — and it is both controllable and observable. If a transfer function has a common factor in numerator and denominator, such as (s+2)/[(s+2)(s+3)], the minimal realization has only one state (the s+3 pole) rather than two, because the cancelled s+2 mode is either uncontrollable or unobservable, depending on which realization is used. Non-minimal realizations hide cancelled modes that can still appear as internal instabilities, which is why minimal realizations are always preferred.
Follow-up: What is the danger of using a non-minimal state space realization in a control design?
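The hidden-mode claim can be demonstrated numerically: building (s+2)/[(s+2)(s+3)] = (s+2)/(s²+5s+6) in controllable canonical form yields a two-state realization that is fully controllable but rank-deficient in observability, so the cancelled mode is hidden from the output:

```python
import numpy as np

# Non-minimal controllable canonical realization of (s+2)/(s^2 + 5s + 6)
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])

Wc = np.hstack([B, A @ B])   # controllability matrix for n = 2
Wo = np.vstack([C, C @ A])   # observability matrix for n = 2

print(np.linalg.matrix_rank(Wc))  # rank 2: fully controllable
print(np.linalg.matrix_rank(Wo))  # rank 1: the cancelled s = -2 mode is unobservable
```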
Q12. How is the discrete-time state space model derived from the continuous-time model?
The discrete-time state space matrices are derived by holding the input constant over each sample period T (zero-order hold): Ad = e^(AT) and Bd = ∫₀ᵀ e^(Aτ)dτ·B, which simplifies to A⁻¹(e^(AT) − I)B when A is invertible, converting the continuous differential equation to a difference equation x[k+1] = Ad·x[k] + Bd·u[k]. For a motor control system sampled at T = 1 ms with A = [[0,1],[0,-10]] and B = [[0],[1]], MATLAB's c2d function computes the exact discrete-time matrices for digital implementation on a microcontroller — note that this A is singular (it has an eigenvalue at 0), so the integral form of Bd must be used. The sampling rate must at minimum satisfy the Nyquist criterion — at least twice the highest frequency of interest — and in practice control loops are sampled 10–30 times faster than the closed-loop bandwidth to limit aliasing and phase loss.
Follow-up: What is the effect of a very fast sampling rate on the discrete-time A matrix?
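Because the example A above is singular, a robust way to compute Ad and Bd together is the augmented-matrix (Van Loan) construction, sketched here and cross-checked against SciPy's zero-order-hold discretization:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# The motor example from the answer: note A has an eigenvalue at 0 (singular)
A = np.array([[0.0, 1.0], [0.0, -10.0]])
B = np.array([[0.0], [1.0]])
T = 1e-3  # 1 ms sample period

# exp([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]] — works even for singular A
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Md = expm(M * T)
Ad, Bd = Md[:n, :n], Md[:n, n:]

# Cross-check against scipy's ZOH discretization (MATLAB c2d equivalent)
C = np.eye(2)
D = np.zeros((2, 1))
Ad2, Bd2, *_ = cont2discrete((A, B, C, D), T, method='zoh')
print("max Ad difference:", np.max(np.abs(Ad - Ad2)))
```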
Q13. What is the Kalman filter and how does it extend the Luenberger observer?
The Kalman filter is an optimal state observer that minimizes the mean squared estimation error when process noise and measurement noise are present, computing the observer gain L as L = P·Cᵀ·R⁻¹, where P is the error covariance matrix and R is the measurement noise covariance. In an automotive inertial navigation system, the Kalman filter fuses noisy accelerometer and GPS data — characterized by their noise covariance matrices Q and R — to produce an optimal position and velocity estimate far better than either sensor alone. The Luenberger observer is a special case of the Kalman filter where the noise statistics are ignored and L is chosen by pole placement.
Follow-up: What are the two covariance matrices in a Kalman filter and what do they represent?
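A sketch of the steady-state (continuous-time) Kalman gain computed from the filter algebraic Riccati equation, using a hypothetical constant-velocity model with illustrative noise covariances:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical constant-velocity model: x = [position, velocity], position measured
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Q = np.diag([0.0, 1.0])   # process noise covariance (disturbance on velocity)
R = np.array([[0.01]])    # measurement noise covariance

# Filter Riccati equation: A P + P A^T - P C^T R^-1 C P + Q = 0,
# solved via the dual call; then L = P C^T R^-1
P = solve_continuous_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(R)

estimator_poles = np.linalg.eigvals(A - L @ C)
print("estimator poles:", estimator_poles)  # all in the left half plane
```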
Q14. What is an LQR (Linear Quadratic Regulator) controller?
LQR is an optimal state feedback control design method that finds the gain matrix K minimizing the cost function J = ∫(xᵀQx + uᵀRu)dt, balancing state error against control effort through the weight matrices Q and R. For a satellite attitude control system, setting Q large penalizes pointing error while setting R large penalizes thruster fuel consumption, allowing the engineer to make the explicit trade-off between accuracy and fuel use through the matrix weights. For a single-input loop, LQR with full state feedback guarantees at least 60° of phase margin and a gain margin from 1/2 to infinity, making it inherently more robust than arbitrary pole placement.
Follow-up: What stability guarantees does LQR provide and why?
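Although MATLAB's lqr is the usual tool, the gain can be sketched directly from the continuous algebraic Riccati equation; the plant and weights below are hypothetical, standing in for a single attitude axis:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant (one attitude axis)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])  # weight on state error (position weighted more heavily)
R = np.array([[1.0]])     # weight on control effort

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("closed-loop eigenvalues:", closed_loop_eigs)  # stable left-half-plane modes
```

Increasing Q relative to R pushes the closed-loop poles further left (faster, more aggressive control); increasing R does the opposite, the fuel-versus-accuracy trade-off described above.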
Q15. What is the difference between state feedback and output feedback?
State feedback uses all n state variables for control: u = -Kx, which gives maximum freedom to place all n closed-loop poles but requires all states to be measured or observed. Output feedback uses only the measured output: u = -Ky, which is simpler to implement but, in its static form, can place at most as many poles as there are outputs, limiting the achievable performance. In a robot whose joint encoders measure only position and not velocity, output feedback with a PD controller is the practical implementation, while state feedback with a velocity observer achieves better disturbance rejection.
Follow-up: Under what conditions is output feedback as powerful as full state feedback?
Common misconceptions
Misconception: The eigenvalues of A are always equal to the closed-loop poles of the system.
Correct: The eigenvalues of A are the open-loop poles; the closed-loop poles are the eigenvalues of (A - BK) after applying state feedback gain K.
Misconception: A system that is stable is always both controllable and observable.
Correct: Stability, controllability, and observability are independent properties; a system can be stable but uncontrollable or unobservable if some modes are decoupled from the input or output.
Misconception: Observer poles should be placed at the same locations as controller poles.
Correct: Observer poles should be placed 3–5 times faster (further left in the s-plane) than the controller poles so that state estimation errors decay before affecting control performance.
Misconception: The state space representation is unique for a given system.
Correct: The state space representation is not unique — infinitely many representations exist related by similarity transformations, all equivalent but differing in the choice of state variables.