Modern machine learning techniques have become increasingly important by providing powerful learning tools for problems that are beyond the capabilities of humans. Statistical learning theory combines the theoretical power of mathematics with the pragmatism of computer science. This combination led to support vector machines (SVMs), one of the most successful learning methods in use today. SVMs quickly became the state-of-the-art technique in several application areas. Although we have tools that achieve high accuracy and apply the acquired knowledge efficiently, new problems bring new challenges. Speed is one of the main limiting factors of machine learning techniques.

Graphics processing units (GPUs) are currently among the most powerful single-chip processors, substantially surpassing their CPU counterparts. They not only excel as graphics engines but, with recent developments, have evolved into highly parallel, fully programmable general-purpose computing devices. GPUs have attracted the interest of the computing research community, whose main goal is harnessing the capabilities of graphics hardware for general computation. This effort is known as general-purpose computing on graphics processing units (GPGPU). GPU computing is the latest path toward affordable high-performance computing. Motivated by the possibilities of GPGPU, we developed a concept for parallelizing SVM classification. Our goal was to offer a solution for the performance needs of SVMs. As a proof of concept, we implemented a GPU-based SVM classifier using the NVIDIA CUDA framework. To evaluate the implementation, we provide a benchmark application that compares our classifier with the established CPU-based LIBSVM library. Our results show that a significant speedup is achievable. Moreover, despite the single-precision GPU arithmetic, classification accuracy is preserved.
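To illustrate why SVM classification maps well to parallel hardware, the sketch below evaluates the decision function f(x) = Σᵢ αᵢyᵢK(sᵢ, x) + b for a whole batch of test points at once: the dominant cost is a dense kernel-matrix computation, which is exactly the kind of workload GPUs excel at. This is our own NumPy illustration, not the paper's CUDA implementation; the RBF kernel and all names (`rbf_kernel_matrix`, `decide`) are assumptions made for the example.

```python
import numpy as np

def rbf_kernel_matrix(S, X, gamma):
    """Pairwise RBF kernel K(s_i, x_j) = exp(-gamma * ||s_i - x_j||^2)."""
    # Squared distances via ||s - x||^2 = ||s||^2 - 2 s.x + ||x||^2;
    # the S @ X.T product is the data-parallel bulk of the work.
    sq = (S**2).sum(1)[:, None] - 2.0 * (S @ X.T) + (X**2).sum(1)[None, :]
    return np.exp(-gamma * sq)

def decide(S, coef, b, X, gamma):
    """Signed decision values for test points X; the sign gives the class."""
    K = rbf_kernel_matrix(S, X, gamma)  # (n_sv, n_test) kernel matrix
    return coef @ K + b                 # coef[i] = alpha_i * y_i

# Toy usage: two support vectors of opposite class.
S = np.array([[0.0, 0.0], [1.0, 1.0]])   # support vectors
coef = np.array([1.0, -1.0])             # alpha_i * y_i
X = np.array([[0.1, 0.0], [0.9, 1.0]])   # test points near each support vector
vals = decide(S, coef, 0.0, X, gamma=1.0)
```

Because each test point (and each kernel entry) is independent, the same structure can be expressed as one GPU thread per kernel-matrix element, which is the kind of mapping a CUDA implementation would exploit.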
Based on our results, we drew conclusions regarding further possibilities offered by GPGPU to SVM classifiers. On the one hand, the arithmetic performance and special features of GPUs increase the applicability of SVM classification; it becomes possible to increase classifier accuracy while preserving, or even increasing, speed by using more complex models that better fit specific problems. On the other hand, we recognized that small input data sizes do not offer enough concurrency for efficient use of the GPU. We suggest possible solutions for these issues.