Eigendecomposition is the factorisation of a matrix into its canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. A common preprocessing step in machine learning is the reduction of the data to a kernel matrix, also known as a Gram matrix. A significant drawback of kernel methods is the computational complexity associated with manipulating kernel matrices. This paper demonstrates that the leading eigenvectors derived from Singular Value Decomposition (SVD) and Nyström approximation methods can be utilised for classification tasks without the need to construct Gram matrices. Experiments were conducted on 14 biomedical datasets to compare classifier performance when the classifier input is: 1) the matrix of leading eigenvectors produced by each approximation method; or 2) the full patient-by-patient Gram matrix. The results support the main hypothesis of this paper: using the leading eigenvectors as classifier input significantly (p < 0.05) improves classifier performance, in terms of both accuracy and running time, compared to using Gram matrices. Furthermore, experiments were carried out on large multi-modal mHealth time series datasets from ten subjects with diverse profiles performing several physical activities; these experiments used a Sequential Deep Learning model. The significance of the proposed approach is that it makes feature extraction methods more accessible on the large-scale unimodal and multi-modal data that are becoming common in many applications.
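The link between the two classifier inputs can be sketched in NumPy: the left singular vectors of a data matrix X are exactly the eigenvectors of the Gram matrix G = XXᵀ, so the leading k of them can serve as a compact n × k input in place of the n × n Gram matrix. This is a minimal illustration on synthetic data, not the paper's experimental pipeline; the data shapes and k are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 100 patients x 20 features
X = rng.normal(size=(100, 20))

# Full patient-by-patient Gram matrix: O(n^2) storage in the number of patients
G = X @ X.T  # shape (100, 100)

# Leading-eigenvector alternative via truncated SVD of X:
# the left singular vectors of X are eigenvectors of G = X X^T,
# with eigenvalues equal to the squared singular values.
k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k = U[:, :k]  # shape (100, k): a far smaller classifier input than G

# Sanity check of the eigenvector property: G u_i = s_i^2 u_i
assert np.allclose(G @ U_k, U_k * (s[:k] ** 2))
print(G.shape, U_k.shape)
```

In this sketch the eigenvectors are obtained by an exact SVD for clarity; a Nyström approximation would instead estimate them from a subsample of rows, trading accuracy for further computational savings.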