As virtual reality headsets occlude the user's view, challenges arise when interaction with the physical surroundings is still required. In a seated workspace environment, such interaction can be essential for productive work: using a physical mouse and keyboard, for example, is difficult without a visual reference to where they are placed. This demo combines computer-vision-based marker detection with machine-learning-based hand detection to bring the user's hands and arbitrary physical objects into VR.
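The fusion step implied by such a pipeline could be sketched roughly as follows. This is a minimal illustration only: the data types, field names, and the `build_overlay` function are assumptions for exposition, not the demo's actual implementation, which would obtain detections from camera frames via its marker and hand detectors.

```python
from dataclasses import dataclass

# Hypothetical detection types; a real pipeline would fill these from
# per-frame marker detection and hand detection results.
@dataclass
class MarkerDetection:
    marker_id: int   # fiducial marker ID attached to a physical object
    pose: tuple      # (x, y, z) position in tracking space

@dataclass
class HandDetection:
    handedness: str  # "left" or "right"
    landmarks: list  # list of (x, y, z) joint positions

def build_overlay(markers, hands, object_labels):
    """Merge marker-based object poses and detected hands into one
    list of entities to render inside the VR scene."""
    entities = []
    for m in markers:
        label = object_labels.get(m.marker_id, "unknown object")
        entities.append({"kind": "object", "label": label, "pose": m.pose})
    for h in hands:
        entities.append({"kind": "hand", "side": h.handedness,
                         "landmarks": h.landmarks})
    return entities

# Example: one marker-tracked keyboard and one detected right hand.
scene = build_overlay(
    markers=[MarkerDetection(marker_id=7, pose=(0.1, -0.2, 0.5))],
    hands=[HandDetection(handedness="right", landmarks=[(0.0, 0.0, 0.4)])],
    object_labels={7: "keyboard"},
)
print([e["kind"] for e in scene])  # → ['object', 'hand']
```

In practice the object entities would be rendered as proxy geometry at the marker poses and the hand entities as skeleton or mesh overlays, restoring the visual reference to the physical workspace.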