We recently proposed the deep learning wavefront sensor, which directly estimates the Zernike coefficients of an aberrated wavefront from a single intensity image using a convolutional neural network. However, deep neural networks demand an intensive training stage, in which more training examples improve the accuracy and increase the number of Zernike modes that can be estimated. Since low-order aberrations such as tip and tilt produce only a space-invariant translation of the point spread function (PSF), we propose to estimate tip and tilt separately when training the deep learning wavefront sensor, reducing the training effort while preserving the sensor's performance. In this paper, we also introduce and test simpler architectures for deep learning wavefront sensing, and we explore the impact of reducing the number of pixels used to estimate a given number of Zernike coefficients. Our preliminary results indicate that we can achieve a significant prediction speedup, aiming at real-time adaptive optics systems.
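Because tip and tilt only translate the PSF on the detector, they can be recovered by a simple centroid measurement rather than by the network. The sketch below illustrates this idea under stated assumptions: the function name, the `pixel_scale` calibration constant, and the synthetic Gaussian spot are illustrative placeholders, not the method or data of the paper.

```python
import numpy as np

def estimate_tip_tilt(psf, pixel_scale=1.0):
    """Estimate tip/tilt from the centroid shift of a PSF image.

    Tip and tilt translate the PSF without changing its shape, so their
    coefficients are proportional to the centroid displacement from the
    image centre. The proportionality constant (pixel_scale, assumed
    here) depends on the optical system and must be calibrated.
    """
    psf = np.asarray(psf, dtype=float)
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total  # intensity-weighted row centroid
    cx = (xs * psf).sum() / total  # intensity-weighted column centroid
    # Displacement from the geometric image centre, in pixels
    dy = cy - (psf.shape[0] - 1) / 2
    dx = cx - (psf.shape[1] - 1) / 2
    return pixel_scale * dx, pixel_scale * dy  # (tilt_x, tilt_y)

# Synthetic example: a Gaussian spot shifted off-centre; the centroid
# recovers the shift, so the CNN only needs the higher-order modes.
yy, xx = np.indices((65, 65))
spot = np.exp(-(((xx - 36) ** 2 + (yy - 29) ** 2) / (2 * 3.0 ** 2)))
tx, ty = estimate_tip_tilt(spot)  # tx ≈ 4.0, ty ≈ -3.0
```

Removing tip and tilt from the network's targets shrinks the label space it must learn, which is consistent with the reduced training effort described above.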