Head poses can be estimated automatically using manifold learning algorithms, under the assumption that, when pose is the only variable, facial images lie on a smooth, low-dimensional manifold. In practice, however, this estimation is challenging because of other appearance variations related to identity, head location in the image, background, facial expression, and illumination. This problem may be alleviated by incorporating the pose angles of training samples into the manifold learning process. In this paper, we propose a supervised neighborhood-based linear feature transformation algorithm, a variant of Fisher Discriminant Analysis (FDA), to constrain the projection computation of manifold learning. Experimental results show that our algorithm improves both the accuracy and the robustness of head pose estimation.
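To make the idea concrete, the following is a minimal sketch, not the paper's actual algorithm, of a pose-supervised, neighborhood-based linear projection in the FDA spirit: samples with nearby pose angles are treated as neighbors and pulled together (within-neighborhood scatter), while pose-distant samples are pushed apart (between-neighborhood scatter). The function name, the binary neighbor weighting, and the regularization constant are illustrative assumptions.

```python
import numpy as np

def supervised_neighborhood_projection(X, poses, n_neighbors=5, n_components=2):
    """Illustrative sketch (not the paper's exact method): learn a linear
    projection that minimizes scatter among pose-neighbors relative to
    scatter among pose-distant samples, in the spirit of FDA.

    X     : (n_samples, n_features) facial feature vectors
    poses : (n_samples,) pose angles supervising the neighborhood graph
    """
    n = X.shape[0]
    # Supervision: pairwise pose-angle differences define neighborhoods.
    pose_diff = np.abs(poses[:, None] - poses[None, :])

    # Binary within-neighborhood weights: 1 for the n_neighbors closest poses.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(pose_diff[i])[1:n_neighbors + 1]  # skip self at index 0
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                    # symmetrize the graph

    # Graph Laplacians give the within- and between-neighborhood scatter.
    L_w = np.diag(W.sum(axis=1)) - W          # within: pose-neighbor pairs
    B = 1.0 - W - np.eye(n)                   # complement graph: distant pairs
    L_b = np.diag(B.sum(axis=1)) - B          # between: pose-distant pairs

    Sw = X.T @ L_w @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    Sb = X.T @ L_b @ X

    # FDA-style criterion: maximize between- over within-scatter via the
    # generalized eigenproblem Sw^{-1} Sb v = lambda v.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real
```

Projecting with `X @ V` then yields low-dimensional coordinates in which pose varies smoothly while identity, illumination, and other nuisance variations are suppressed, which is the property the manifold-based pose estimator relies on.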