VRG3D Documentation

Quick Links

VRG3D::VRApp - Application base class.
VRG3D::KnownVRSetups - Holds a list of key names and descriptions of known VR form factors.
VRG3D::DisplayTile - Describes the physical location of the projection wall with respect to the coordinate system of the trackers.
VRG3D::ProjectionVRCamera - Knows how to set up the correct projection matrices given a DisplayTile and head position.
VRG3D::Event - Reference-counted Event class; stores a named event plus data of type 1D, 2D, 3D, CoordinateFrame, or string.
VRG3D::EventNetMsg - A message for sending events over the network.
VRG3D::EventBufferNetMsg - A message for sending an array of events over the network.
VRG3D::InputDevice - Base class for input devices that get polled each frame for new data.
VRG3D::VRPNTrackerDevice - Connects to a VRPN server and reports data on trackers.
VRG3D::VRPNButtonDevice - Connects to a VRPN server and reports data when buttons are pressed and released.
VRG3D::VRPNAnalogDevice - Connects to a VRPN server and reports data for analog devices.
VRG3D::MouseToTracker - Simulates 6DOF tracking data with mouse movement - useful for debugging VR apps on a desktop.
VRG3D::SynchedSystem - Maintains a clock that is synchronized across all rendering nodes.
G3DOperators.H - Useful routines for strings and G3D types.


Quick Start Guide for Brown Users

  1. Put this in your .cshrc or .tcshrc startup file to set up our $G software framework. For more info on $G, see http://vis.cs.brown.edu/resources/doc/gfxtools-docs/index.html

    foreach d (/cygdrive/c/gfx /cygdrive/d/gfx /map/gfx0 /share/gfx)
      if (-r $d/tools/shared/lib/gfxtools-startup-inc) then
        source $d/tools/shared/lib/gfxtools-startup-inc
        break
      endif
    end
    

  2. Open a new shell so that your .cshrc file gets sourced. Now, follow these steps to create a new project of your own (replace mynewproject with the name of your project):

    1. mkdir ~/mynewproject
    2. cp $G/src/G3D/VRG3D/Makefile.newproject ~/mynewproject/Makefile
    3. cp $G/src/G3D/VRG3D/vrg3d-demo.cpp ~/mynewproject/
    4. cd ~/mynewproject
    5. Add your project to the $G cvs tree with the command gfxprojinit mynewproject . (Remember the period at the end of the command.)
    6. cd $G/src/mynewproject
    7. Edit vrg3d-demo.cpp to build your own program by filling in the doUserInput() and doGraphics() routines.
    8. Run make to compile the project.
    9. Use the command obj/mynewproject-d desktop to run the program in desktop mode.

  3. This gets you started building your own VRG3D application. To run it in the Cave, also take a look at the Cave-specific instructions below.

Overview/Goals

VRG3D is a library that sits on top of G3D and implements a minimal set of features required to run applications written with G3D in projection-based VR setups, such as CAVEs and Fishtanks. In particular, this requires: (1) setting up a stereo display with a virtual camera that appropriately updates the projection matrix based on head-tracking information each frame, (2) dealing with 6 degree-of-freedom input from trackers and interfacing with devices typically used in VR, and (3) synchronizing the distribution of input events and rendering across the rendering clusters typically used in multi-wall or tiled VR setups.

VRG3D provides config files and settings that can be used with several different VR configurations at Brown, including the CAVE, and at other universities.

How It Works

In keeping with a typical G3D demo application setup, VRG3D applications should subclass from VRG3D::VRApp, which provides two key methods for the application programmer to override: doGraphics() and doUserInput().

doGraphics() is called once per eye for stereo rendering. VRG3D first sets the appropriate draw buffer (GL_BACK_LEFT or GL_BACK_RIGHT), then clears the screen, then sets G3D::RenderDevice's perspective projection matrix and camera-to-world matrix, and finally calls doGraphics(). Your application should not call any of these methods itself. You should also let VRG3D call RenderDevice::beginFrame() and endFrame().
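
The per-eye flow described above looks roughly like the following. This is only an illustrative approximation, not VRG3D's actual code (leftEyeProjection and leftEyeFrame are placeholder names); the point is that by the time doGraphics() is called, the draw buffer, clear, and matrices have already been handled for you.


// Illustrative approximation only -- VRApp performs these steps for you each frame.
renderDevice->beginFrame();
glDrawBuffer(GL_BACK_LEFT);                            // GL_BACK_RIGHT on the second pass
renderDevice->clear();                                 // clear color and depth
renderDevice->setProjectionMatrix(leftEyeProjection);  // computed by the ProjectionVRCamera
renderDevice->setCameraToWorldMatrix(leftEyeFrame);    // based on the latest head-tracking data
doGraphics(renderDevice);                              // your override only needs to draw the scene
renderDevice->endFrame();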

doUserInput() is called with an Array of events generated since the last frame. You can iterate through this list and listen for events of a particular name. See the VRG3D demo for an example.
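
A minimal skeleton, distilled from the demo application later in this document, shows both overrides in place (the class name MyApp and the event name checked below are just examples following the demo's conventions):


#include <VRG3D.H>

class MyApp : public VRApp
{
public:
  MyApp(const std::string &vrSetup) : VRApp() {
    init(vrSetup);   // opens the window(s) and connects to input devices
  }

  // Called once per frame with all events generated since the last frame.
  virtual void doUserInput(Array<VRG3D::EventRef> &events) {
    for (int i=0;i<events.size();i++) {
      if (events[i]->getName() == "Wand_Trigger_Btn_down") {
        // respond to a button press here
      }
    }
  }

  // Called once per eye; the draw buffer, clear, and matrices are already set up.
  virtual void doGraphics(RenderDevice *rd) {
    Draw::axes(CoordinateFrame(), rd, Color3::red(), Color3::green(), Color3::blue(), 0.25);
  }
};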

Connections to typical VR devices, such as trackers, wands, and buttons, are usually handled by interfacing with UNC's VRPN library, which is freely available from the VRPN website: http://www.cs.unc.edu/Research/vrpn VRG3D uses a device configuration file to start up connections to a VRPN server and request events from it. One of the trackers specified in this config file should generate an event named "Head_Tracker". VRG3D listens for this event and updates the camera based on it.

The coordinates reported by the trackers should be registered with the physical space of the projection screen(s). These coordinates are specified by the VRG3D::DisplayTile stored in VRG3D::VRApp.

Include Files: Put #include <VRG3D.H> in your project to include all of the VRG3D library headers.

Link Line: Link with VRG3D-d.lib (libVRG3D-d.a on linux systems) for the debugging version of the library. Use VRG3D.lib/libVRG3D.a for the optimized version and libVRG3D-p.a for the profiling version.

Example code: The file vrg3d-demo.cpp is a good place to start to see how to use the library.

Running VRG3D Applications in Different VR Setups

When you initialize a new VRApp, you either pass it a string identifying a "Known VR Setup" (see VRG3D::KnownVRSetups) or you pass it a set of custom settings that describe your own VR setup. The vrg3d-demo takes as its first argument the name of the known setup to start. If you run the demo with a -h argument, it will print the name and description of all the known setups.

Running VRG3D Applications on a Cluster

To run applications on a cluster, you need to start one vrg3d-server, which connects to input devices, distributes events to the rendering clients, and synchronizes the rendering, and you need to start one instance of a VRApp on each of the rendering nodes. The server can be run as a command-line program or it can open up a graphics window. In window mode, it passes any keyboard and mouse events generated in the window on to the clients. Window mode is the default; to run in command-line mode, start the server with -nogfx as the first argument.

Specific Instructions for Brown's CAVE

Prereqs:
  1. You must be able to ssh to cs-front and ssh from cs-front to any of the other cs-* machines.
  2. You must have $G set up in your account.

To run without keyboard and mouse input:

  1. ssh cs-front
  2. cd to your program directory.
  3. Run: $G/bin/vrg3d-runcave your-program-name
  4. To quit, press Ctrl-C in the xterm in which you started the program.

To run with keyboard and mouse input from the Windows machine "depthcube": this is currently the machine that the wireless mouse is plugged into, so use this if you want to get button-press events from that mouse device.

  1. Make sure the cavedemo user is logged in on "depthcube". (A program called grexecd should be running minimized whenever cavedemo is logged in.)
  2. From your linux machine: ssh cs-front
  3. cd to your program directory.
  4. run $G/bin/vrg3d-runcave-winserv-dc your-program-name
  5. To quit, press ESC in the black square window that pops up on the Windows machine.

To run with keyboard and mouse input from the Windows machine "audio-cave":

  1. Make sure the cavedemo user is logged in on "audio-cave". (A program called grexecd should be running minimized whenever cavedemo is logged in.)
  2. From your linux machine: ssh cs-front
  3. cd to your program directory.
  4. Run: $G/bin/vrg3d-runcave-winserv your-program-name
  5. To quit, press ESC in the black square window that pops up on the Windows machine.

To run any of the vrg3d-runcave* scripts remotely when you cannot pop up an xterm for each wall, run the script with -x as the first argument and it will run ssh processes in the background of the current shell rather than starting a new xterm for each one.

Testing VR Applications on a Desktop Machine

If you tell VRG3D::VRApp to start with a "desktop" VR setup, it will just open a normal graphics window, which is useful for testing your code away from VR. The VRG3D::MouseToTracker class is very helpful in approximating VR tracker input on a desktop. Look at the demo application to see how it is used. As you move the mouse around, it generates CoordinateFrame events named "Mouse1_Tracker". If you hold down the SHIFT key while moving the mouse up and down, you can move the simulated tracker in the Z direction. If you hold down the X, Y, or Z key and then move the mouse from side to side, you will rotate the tracker around that axis. To simulate more than one tracker, tap the TAB key; the mouse then generates events called "Mouse2_Tracker". Set up your application to switch whether it responds to Mouse1_Tracker, Wand_Tracker, or some other named event depending on whether you start it up in a desktop configuration, in the Cave, etc., as in the sketch below.
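
Here is a minimal sketch of one way to do that switch, assuming you keep the setup string that was passed to VRApp; the member name _wandName and the check against "desktop" are illustrative, not part of the VRG3D API:


// Pick the tracker event name once, based on the VR setup string.
std::string _wandName;

void chooseTrackerName(const std::string &vrSetup) {
  if (vrSetup == "desktop") {
    _wandName = "Mouse1_Tracker";   // generated by MouseToTracker on the desktop
  }
  else {
    _wandName = "Wand_Tracker";     // generated by the real tracking device in the Cave
  }
}

// Then, inside doUserInput():
//   if (events[i]->getName() == _wandName) { ... }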

Considerations in Moving from Desktop to VR

Coordinate System Conventions

RoomSpace: VRG3D is set up to work with projection-based VR, where 3D trackers produce input that is relative to a projection screen placed in the room. We call the coordinate space that both the trackers and the projection screen live in "RoomSpace". The dimensions and position of the VRG3D::DisplayTile should be specified in these coordinates, and all tracking events that come into your program will be in this coordinate system.

VRG3D's coordinate system conventions are as follows: typically, +X is to the right, +Y is up, and +Z comes out of the screen. In a Cave setup, we have the same convention, and +Z typically ends up pointing out of the back of the Cave, through the doorway. For desktop, fishtank, and powerwall setups where there is only one screen, the origin of the coordinate system is placed directly in the center of the screen. For Caves and multi-wall setups, the origin floats directly in the center of the Cave. Units are typically reported in feet. This setup means that if you place your models and datasets directly at the origin of the world, you should see them when you start up your program.

Position Objects in the Scene, not the Camera

When writing 3D graphics programs for display on the desktop, we often think about repositioning a virtual camera to get a good view of the model. In VR, we have to change this mentality slightly because head tracking is constantly controlling the position of the virtual camera.

Instead of moving the camera to see the model, move the model so that it lies within a reasonable position in the room so that the viewer can walk around it.
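
For example, here is a small sketch of placing a model at a comfortable position in RoomSpace inside doGraphics(); the translation values are arbitrary and assume units of feet, as in the Cave:


rd->pushState();
// Place the model a couple of feet in front of the RoomSpace origin (the -Z direction,
// i.e., into the screen on a desktop or toward the front wall of the Cave).
rd->setObjectToWorldMatrix(CoordinateFrame(Vector3(0, 0, -2)));
Draw::axes(CoordinateFrame(), rd, Color3::red(), Color3::green(), Color3::blue(), 0.25);
rd->popState();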

VirtualSpace: In the VRG3D demo, we store a single coordinate frame that specifies the transformation from RoomSpace to a space we call VirtualSpace. You may find it useful to adopt such a convention in your programs, but VRG3D does not require it. VirtualSpace and RoomSpace are identical when the demo program starts up, but as the arrow keys are pressed, a transformation is applied to the virtual-to-room-space matrix. This has the effect of moving the "camera" around. You can use a transformation like this to do something like "clutching" the virtual object using a button and a tracker and moving it around. See the demo for more ideas.
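
A hedged sketch of such a clutching interaction is shown below. It reuses the member names from the demo (_trackerFrames, _virtualToRoomSpace); the _grabbing and _lastWandFrame members, the helper function, and the "Wand_Left_Btn_up" event name (assumed to mirror the _down events) are hypothetical:


// Hypothetical clutching sketch: while the wand button is held, apply the wand's
// incremental RoomSpace motion to the virtual-to-room transformation.
bool            _grabbing;        // initialize to false in the constructor
CoordinateFrame _lastWandFrame;

void handleClutch(VRG3D::EventRef event) {
  if (event->getName() == "Wand_Left_Btn_down") {
    _grabbing = true;
    _lastWandFrame = _trackerFrames["Wand_Tracker"];
  }
  else if (event->getName() == "Wand_Left_Btn_up") {
    _grabbing = false;
  }
  else if (_grabbing && (event->getName() == "Wand_Tracker")) {
    CoordinateFrame newFrame = event->getCoordinateFrameData();
    // Compose the wand's frame-to-frame change onto the existing transform.
    _virtualToRoomSpace = newFrame * _lastWandFrame.inverse() * _virtualToRoomSpace;
    _lastWandFrame = newFrame;
  }
}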

Using OpenGL calls with G3D

If you want to make straight OpenGL calls from within a G3D application, put them inside this wrapper so that you don't interfere with G3D's internal tracking of the current OpenGL state.


renderDevice->pushState();
glPushAttrib(GL_ALL_ATTRIB_BITS);

// Put your opengl calls here

glPopAttrib();
renderDevice->popState();

Using trackdAPI Rather than VRPN for Device Input

Edit all the Makefiles that have a line starting with DEFINES= so that the string USE_TRACKD appears at the end of it, then recompile the library. Model your device config file after the file "dive-devices.cfg".


Example Demo Application

/*
 * \author Daniel Keefe (dfk)
 *
 * \file  vrg3d-demo.cpp
 *
 */



#include <VRG3D.H>

class MyVRApp : public VRApp
{
public:
  MyVRApp(const std::string &mySetup) : VRApp() {
    // initialize the VRApp
    init(mySetup);

    _mouseToTracker = new MouseToTracker(_camera, 2);

    _virtualToRoomSpace = CoordinateFrame();
  }

  virtual ~MyVRApp() {}

  void doUserInput(Array<VRG3D::EventRef> &events) {
    // MouseToTracker is a really helpful class for testing out VR
    // interactions from the desktop.  This call makes it respond to
    // mouse events and generate new events as if it were a 6DOF
    // tracking device.  We add the new events to the event queue and
    // process them as usual.
    Array<VRG3D::EventRef> newEvents;
    _mouseToTracker->doUserInput(events, newEvents);
    events.append(newEvents);

    for (int i=0;i<events.size();i++) {

      // Save all the tracker events that come in so we can use them in the doGraphics routine
      if (endsWith(events[i]->getName(), "_Tracker")) {
        if (_trackerFrames.containsKey(events[i]->getName())) {
          _trackerFrames[events[i]->getName()] = events[i]->getCoordinateFrameData();
        }
        else {
          _trackerFrames.set(events[i]->getName(), events[i]->getCoordinateFrameData());
        }
      }

      // Respond to events to do some simple navigation
      else if (events[i]->getName() == "kbd_LEFT_down") {
        _virtualToRoomSpace = CoordinateFrame(Matrix3::fromAxisAngle(Vector3(0,1,0), toRadians(5.0))) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_RIGHT_down") {
        _virtualToRoomSpace = CoordinateFrame(Matrix3::fromAxisAngle(Vector3(0,1,0), toRadians(-5.0))) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_UP_down") {
        _virtualToRoomSpace = CoordinateFrame(Matrix3::fromAxisAngle(Vector3(1,0,0), toRadians(5.0))) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_DOWN_down") {
        _virtualToRoomSpace = CoordinateFrame(Matrix3::fromAxisAngle(Vector3(1,0,0), toRadians(-5.0))) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_SHIFT_LEFT_down") {
        _virtualToRoomSpace = CoordinateFrame(Vector3(-0.1,0,0)) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_SHIFT_RIGHT_down") {
        _virtualToRoomSpace = CoordinateFrame(Vector3(0.1,0,0)) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_SHIFT_UP_down") {
        _virtualToRoomSpace = CoordinateFrame(Vector3(0,0.1,0)) * _virtualToRoomSpace;
      }
      else if (events[i]->getName() == "kbd_SHIFT_DOWN_down") {
        _virtualToRoomSpace = CoordinateFrame(Vector3(0,-0.1,0)) * _virtualToRoomSpace;
      }


      // Some printouts for other events, just to show how to access other types of event data
      else if (events[i]->getName() == "kbd_SPACE_down") {
        cout << "Pressed the space key." << endl;
      }
      else if (events[i]->getName() == "Wand_Left_Btn_down") {
        cout << "Wand left btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_Middle_Btn_down") {
        cout << "Wand middle btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_MidLeft_Btn_down") {
        cout << "Wand middle left btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_MidRight_Btn_down") {
        cout << "Wand middle right btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_Right_Btn_down") {
        cout << "Wand right btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_Joystick_Btn_down") {
        cout << "Wand joystick btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_Trigger_Btn_down") {
        cout << "Wand trigger btn pressed." << endl;
      }
      else if (events[i]->getName() == "Wand_Joystick_X") {
        cout << "Wand Joystick X = " << events[i]->get1DData() << endl;
      }
      else if (events[i]->getName() == "Wand_Joystick_Y") {
        cout << "Wand Joystick Y = " << events[i]->get1DData() << endl;
      }
      else if (events[i]->getName() == "Mouse_Pointer") {
        static Vector2 lastPos;
        if (events[i]->get2DData() != lastPos) {
          cout << "New mouse position = " << events[i]->get2DData() << endl;
          lastPos = events[i]->get2DData();
        }
      }
      else if (events[i]->getName() == "Mouse_Left_Btn_down") {
        cout << "Mouse left btn pressed at position " << events[i]->get2DData() << endl;
      }
      else if (beginsWith(events[i]->getName(), "kbd_")) {
        cout << "Keyboard event: " << events[i]->getName() << endl;
      }
      else {
        // This would print out the names of all events, but it can be too
        // much if you are getting several tracker updates per frame.
        // Uncomment the following line to see everything:
        //cout << events[i]->getName() << endl;
      }
    }
  }

  void doGraphics(RenderDevice *rd) {
    
    // Load a font for the fps display, findVRG3DDataFile looks first in the current directory
    // and then in $G/lib/VRG3D/
    if (_font.isNull()) {
      std::string fontfile = VRApp::findVRG3DDataFile("eurostyle.fnt");
      if (fileExists(fontfile)) {
        _font = GFont::fromFile(fontfile);
      }
    }

    // Draw labeled axes for all the Tracker events we have received, except, skip the
    // Head_Tracker because drawing axes right on top of the eyes would block our view
    // of everything else.
    double axesSize = 0.15;
    Array<std::string> trackerNames = _trackerFrames.getKeys();
    for (int i=0;i<trackerNames.size();i++) {
      CoordinateFrame trackerFrame = _trackerFrames[trackerNames[i]];
      if (trackerNames[i] != "Head_Tracker") {
        Draw::axes(trackerFrame, rd, Color3::red(), Color3::green(), Color3::blue(), axesSize);
        if (_font.notNull()) {
          rd->pushState();
          rd->disableLighting();
          CoordinateFrame textframe = trackerFrame * CoordinateFrame(Vector3(1.1*axesSize, 0, 0));
          _font->draw3D(rd, trackerNames[i], textframe, 0.25*axesSize, Color3::white());
          rd->popState();
        }
      }
      if (_font.notNull()) {
        // This draws the position of the tracker on the screen
        rd->push2D();
        rd->disableLighting();
        std::string s = format("%s: %.2f, %.2f, %.2f", trackerNames[i].c_str(),
                               trackerFrame.translation[0], 
                               trackerFrame.translation[1], 
                               trackerFrame.translation[2]);
        _font->draw2D(rd, s, Vector2(25,50 + 25*i), 12, Color3::white());
        rd->pop2D();
      }
    }


    // Drawing the projection of each eye onto the filmplane is often
    // a good way to debug head tracking
    Plane filmplane = Plane(_tile.topLeft, _tile.botLeft, _tile.topRight);
    Vector3 norm = filmplane.normal();

    Vector3 leftEye = _camera->getLeftEyeFrame().translation;
    Ray rl = Ray::fromOriginAndDirection(leftEye, -norm);
    Vector3 leftEyeScreen = rl.intersection(filmplane);
    if (!leftEyeScreen.isFinite()) {
      // projection didn't work, try reversing ray direction
      rl = Ray::fromOriginAndDirection(leftEye, norm);
      leftEyeScreen = rl.intersection(filmplane);
    }
    if (leftEyeScreen.isFinite()) {
      Draw::sphere(Sphere(leftEyeScreen, 0.015), rd, Color3::red(), Color4::clear());
    }

    Vector3 rightEye = _camera->getRightEyeFrame().translation;
    Ray rr = Ray::fromOriginAndDirection(rightEye, -norm);
    Vector3 rightEyeScreen = rr.intersection(filmplane);
    if (!rightEyeScreen.isFinite()) {
      // projection didn't work, try reversing ray direction
      rr = Ray::fromOriginAndDirection(rightEye, norm);
      rightEyeScreen = rr.intersection(filmplane);
    }
    if (rightEyeScreen.isFinite()) {
      Draw::sphere(Sphere(rightEyeScreen, 0.015), rd, Color3::green(), Color4::clear());
    }

    
    // This code draws the frames per second on the screen
    if (_font.notNull()) {
      rd->push2D();
      std::string msg = format("%3d fps", iRound(rd->frameRate()));
      _font->draw2D(rd, msg, Vector2(25,25), 12, Color3(0.61, 0.72, 0.92));
      rd->pop2D();
    }



    // The tracker frames above are drawn with the object to world
    // matrix set to the identity because tracking data comes into the
    // system in the Room Space coordinate system.  Room Space is tied
    // to the dimensions of the room and the projection screen within
    // the room, thus it never changes as your program runs.  However,
    // it is often convenient to move objects around in a virtual
    // space that can change relative to the screen.  For these
    // objects, we put a virtual to room space transform on the OpenGL
    // matrix stack before drawing them, as is done here..
    rd->pushState();
    rd->setObjectToWorldMatrix(_virtualToRoomSpace);
    // This draws a simple piece of geometry using G3D::Draw at the
    // origin of Virtual Space.
    Draw::axes(CoordinateFrame(), rd, Color3::red(), Color3::green(), Color3::blue(), 0.25);

    rd->popState();
  }

protected:
  Table<std::string, CoordinateFrame> _trackerFrames;
  GFontRef          _font;
  MouseToTrackerRef _mouseToTracker;
  CoordinateFrame   _virtualToRoomSpace;
};




int main(int argc, char **argv)
{
  // The first argument to the program tells us which of the known VR
  // setups to start
  std::string setupStr;
  if (argc >= 2) {
    setupStr = std::string(argv[1]);
  }

  // This opens up the graphics window, and starts connections to 
  // input devices, but doesn't actually start rendering yet.
  MyVRApp *app = new MyVRApp(setupStr);

  // This starts the rendering/input processing loop
  app->run();
      
  return 0;
}


Example Demo Makefile

GBUILD_DIR=$(G)/lib

include $(GBUILD_DIR)/Makefile.gbuild.init

ifeq ($(GARCH), linux)
  G_COMPILER_VER = gcc3
endif

include $(GBUILD_DIR)/Makefile.gbuild.compilers

PROJECT_NAME=vrg3d-demo
OBJDIR=obj
SRC=vrg3d-demo.cpp
CSRC=

# See $G/src/G3D/VRG3D/Makefile.lib for a description of DEFINES
# relevant to VRG3D.
DEFINES=NO_SDL_MAIN USE_VRPN USE_SPACENAV
-include Makefile.localdefines

# Version of G3D installed in $G
G3D_VER=-latest


# Different versions of VRPN are used on different machines at Brown.
VRPN_VER=-7.07-b5
#VRPN_VER=-6.05
# Check to see if compiling on a machine whose name starts with cs-
# If so, then assume we're in the Cave and use an older version of VRPN
ifneq ($(findstring cs-, $(HOST)),)
  VRPN_VER=-6.02
endif
ifneq ($(findstring depthcube, $(HOST)),)
  VRPN_VER=-6.02
endif
ifneq ($(findstring audio-cave, $(HOST)),)
  VRPN_VER=-6.02
endif
ifeq ($(GARCH), OSX)
  VRPN_VER=-7.00
endif


G_INCLUDE_DIRS=G3D$(G3D_VER) vrpn$(VRPN_VER) VRG3D
G_LIB_DIRS=.
LIB_DIRS=obj
LIBS =

# Static libs to link in the debugging/opt/profiling cases
DEBUG_LIBS =  VRG3D$(G_COMPILER_SUFFIX)-d GLG3Dd G3Dd
OPT_LIBS   =  VRG3D$(G_COMPILER_SUFFIX)   GLG3D  G3D  
PROF_LIBS  =  VRG3D$(G_COMPILER_SUFFIX)-p GLG3D  G3D




# Architecture-specific settings:

ifeq ($(GARCH),linux)
  G_LIB_DIRS := $(G_LIB_DIRS) G3D$(G3D_VER)-x86-g++-3.4
  DEBUG_LIBS := $(DEBUG_LIBS) SDL
  OPT_LIBS   := $(OPT_LIBS)   SDL
  PROF_LIBS  := $(PROF_LIBS)  SDL
  LIB_DIRS   := $(LIB_DIRS) /usr/X11R6/lib
  LIBS       := dl X11 Xext pthread GLU GL z jpeg png zip $(LIBS)
endif

ifeq ($(GARCH),OSX)
  G_LIB_DIRS := $(G_LIB_DIRS) G3D$(G3D_VER)-x86-g++-4.2
  LIB_DIRS   := $(LIB_DIRS) /usr/X11R6/lib
  LIBS       := dl X11 Xext Xi Xmu pthread GLU z GL jpeg png zip
  DEBUG_LIBS := $(DEBUG_LIBS)
  FRAMEWORKS = Cocoa Carbon SDL OpenGL
  ifneq ($(findstring USE_SPACENAV,$(DEFINES)),) 
    FRAMEWORKS := $(FRAMEWORKS) 3DconnexionClient
  endif
endif

ifeq ($(GARCH),WIN32)
  G_LIB_DIRS := $(G_LIB_DIRS) G3D$(G3D_VER)-x86-vc8.0
  LIBS     := $(LIBS) zlib jpeg png comctl32 user32 gdi32 advapi32 ws2_32
endif




# Adjust settings based on particular DEFINES:

ifneq ($(findstring USE_VRPN,$(DEFINES)),) 
  ifeq ($(GARCH), WIN32)
    DEBUG_LIBS := $(DEBUG_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX) /NODEFAULTLIB:vrpn
    OPT_LIBS   := $(OPT_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX) /NODEFAULTLIB:vrpn
    PROF_LIBS  := $(PROF_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX) /NODEFAULTLIB:vrpn
  else
    DEBUG_LIBS := $(DEBUG_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX) quat$(VRPN_VER)$(G_COMPILER_SUFFIX)
    OPT_LIBS   := $(OPT_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX)  quat$(VRPN_VER)$(G_COMPILER_SUFFIX)
    PROF_LIBS  := $(PROF_LIBS) vrpn$(VRPN_VER)$(G_COMPILER_SUFFIX) quat$(VRPN_VER)$(G_COMPILER_SUFFIX)
  endif
endif


ifneq ($(findstring USE_GLUT,$(DEFINES)),)
  LIBS := $(LIBS) glut
  ifneq ($(GARCH), WIN32)
    LIBS := $(LIBS) Xmu Xi
  endif
else
  ifeq ($(GARCH), WIN32)
    LIBS := $(LIBS) /NODEFAULTLIB:glut32
  endif
endif


ifneq ($(findstring USE_TRACKD,$(DEFINES)),) 
  G_INCLUDE_DIRS := $(G_INCLUDE_DIRS) trackdAPI
  ifeq ($(GARCH), WIN32)
    LIBS := $(LIBS) trackdAPI_MTs
  else
    LIBS := $(LIBS) trackdAPI
  endif
endif


# The cluster-sync library can be used for the server/client connection,
# but it's not the default.
ifneq ($(findstring USE_CLUSTERSYNC,$(DEFINES)),)
  ifeq ($(GARCH),WIN32)
    G_INCLUDE_DIRS := $(G_INCLUDE_DIRS) cluster-sync pthreads-win32
    DEBUG_LIBS :=  $(DEBUG_LIBS) cluster-sync$(G_COMPILER_SUFFIX)-d
    OPT_LIBS   :=  $(OPT_LIBS) cluster-sync$(G_COMPILER_SUFFIX)
    PROF_LIBS  :=  $(PROF_LIBS) cluster-sync$(G_COMPILER_SUFFIX)-p 
    LIBS     := $(LIBS) pthreadVC2
  endif
endif



include $(GBUILD_DIR)/Makefile.gbuild.defines



all: progg

debug: progg

opt: progo


include $(GBUILD_DIR)/Makefile.gbuild.rules


# Override some linker flags to add some options for OSX
ifeq ($(GARCH),OSX)
  DBGLDFLAGS := $(DBGLDFLAGS) -multiply_defined suppress -all_load -arch i386
  OPTLDFLAGS := $(OPTLDFLAGS) -multiply_defined suppress -all_load -arch i386
endif



Author and maintainer: Daniel Keefe (dfk@cs.brown.edu)

