dnorman: kinect*

8 bookmarks, sorted by date (descending)

  1. The page is a couple of years old with no recent updates, but the project claims to combine point clouds from multiple Kinect sensors (each driven by its own dedicated PC) into a single 3D recording; a minimal merging sketch follows this entry.
    http://brekel.com/multi-sensor
    by dnorman (2017-06-24)
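    A minimal sketch of the merging step, assuming each sensor's extrinsic pose (a 4x4 rigid transform into a shared world frame) is already known from calibration; the function and array names are illustrative assumptions, not taken from the Brekel project.

      import numpy as np

      def merge_point_clouds(clouds, extrinsics):
          """Transform each sensor's (N, 3) point cloud into a shared
          world frame and concatenate the results into one cloud.

          clouds     -- list of (N_i, 3) float arrays, one per Kinect
          extrinsics -- list of 4x4 rigid transforms (sensor -> world)
          """
          merged = []
          for points, T in zip(clouds, extrinsics):
              # Homogeneous coordinates: append a column of ones.
              homo = np.hstack([points, np.ones((points.shape[0], 1))])
              # Apply the rigid transform, then drop the w component.
              merged.append((homo @ T.T)[:, :3])
          return np.vstack(merged)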
  2. -
    http://sdk.rethinkrobotics.com/wiki/Kinect_basics
    by dnorman (2017-01-20)
  3. This video shows my MSc Robotics Baxter teleoperation project at Plymouth University. A Geomagic haptic feedback controller is used to control the position of the robot's right end effector, and also provides force feedback to the operator, so that they can sense when the end effector is near an object, or feel the firmness/consistency of an object held in the grippers via the output of an FSR (Force Sensing Resistor) mounted on the inside of one gripper pincer. A Kinect v2 gives the operator visual feedback of the robot's workspace; the viewpoint can be controlled via an Oculus Rift DK2 VR headset (from the position reported by the headset's built-in IMU). Kinect v2 colour images are displayed on the headset's screen to give the operator an immersive experience when controlling the robot arm, and the operator can also switch views from the Kinect v2 output to a camera built into Baxter's cuff (above the end effector). The project aims to let the operator control the robot arm as naturally as using their own arm to interact with objects in an environment.
    https://www.youtube.com/watch?v=X0PLx7iVjME
    Tags: , , , , by dnorman (2016-12-23)
  4. Python code to teleoperate the Baxter industrial robot using a Kinect, Oculus Rift, and a web interface; a sketch of the controller-to-workspace position mapping common to such rigs follows this entry.
    https://github.com/ptsteadman/baxter-teleoperation
    Tags: , , , , by dnorman (2016-12-23)
  5. -
    http://perceptiveio.com/publications/...ormance-capture-of-challenging-scenes
  6. KinectFusion provides 3D object scanning and model creation using a Kinect for Windows sensor. The user can paint a scene with the Kinect camera and simultaneously see, and interact with, a detailed 3D model of the scene. KinectFusion can run at interactive rates on supported GPUs, and at non-interactive rates on a variety of hardware; non-interactive rates may allow larger volume reconstructions. A sketch of the underlying TSDF update follows this entry.
    https://msdn.microsoft.com/en-us/libr...8670.aspx?f=255&MSPPError=-2147217396
    by dnorman (2016-11-06)
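    Under the hood, KinectFusion-style reconstruction folds each depth frame into a truncated signed distance function (TSDF) voxel volume using a running weighted average (Curless & Levoy style fusion). A minimal sketch of that per-voxel update, assuming the current frame's signed distance to the observed surface has already been computed per voxel from the depth image and camera pose; the names and truncation value are assumptions.

      import numpy as np

      TRUNC = 0.03  # truncation distance in metres (assumed value)

      def integrate(tsdf, weights, sdf, max_weight=64):
          """Fold one frame's signed distances into the TSDF volume.

          tsdf, weights -- float voxel grids of TSDF values and weights
          sdf           -- this frame's signed distance per voxel (same shape)
          """
          # Skip voxels far behind the observed surface (occluded space).
          valid = sdf > -TRUNC
          d = np.clip(sdf, -TRUNC, TRUNC) / TRUNC  # normalise to [-1, 1]
          # Running weighted average, with the weight capped so the model
          # can still adapt to new observations.
          w_new = np.minimum(weights + 1.0, max_weight)
          tsdf[valid] = (tsdf[valid] * weights[valid] + d[valid]) / w_new[valid]
          weights[valid] = w_new[valid]
          return tsdf, weights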
  7. How would it work with a Kinect on a collaboration cart observing interactions, documenting interaction and engagement using skeletal mapping and emotion inference?
    http://www.shacknews.com/article/7930...inect-reading-emotions-and-heart-rate
  8. Could something like this be used to generate documentation of interactions in an F2F class, for real-time or post-hoc analysis?

    (GP: I'm thinking of something to help instructors and students visualize classroom interactions in a similar way to what we have with online discourse and interaction analysis - something like Gephi or NodeXL for F2F classes… a minimal graph-export sketch follows this entry.)
    https://campustechnology.com/articles...recognition-in-the-classroom.aspx?m=2
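    A minimal sketch of the export side of that idea, assuming an upstream process (e.g. Kinect skeletal tracking plus proximity heuristics) has already produced (person_a, person_b) interaction events; the event format and function name are assumptions. The resulting GEXF file opens directly in Gephi.

      import networkx as nx

      def interactions_to_gexf(events, path="classroom.gexf"):
          """Build a weighted interaction graph from (person_a, person_b)
          pairs and write it in GEXF format for Gephi."""
          G = nx.Graph()
          for a, b in events:
              if G.has_edge(a, b):
                  G[a][b]["weight"] += 1  # repeat interactions thicken the edge
              else:
                  G.add_edge(a, b, weight=1)
          nx.write_gexf(G, path)
          return G

      # Example: three observed interaction events between anonymised students.
      interactions_to_gexf([("s1", "s2"), ("s1", "s2"), ("s2", "s3")])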

Propulsed by SemanticScuttle