US20140055400A1 - Digital workspace ergonomics apparatuses, methods and systems - Google Patents

Digital workspace ergonomics apparatuses, methods and systems

Info

Publication number: US20140055400A1
Authority: US (United States)
Prior art keywords: user, client, display surface, display, DWE
Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US 14/018,370
Inventor: Jeffrey Jon Reuschel
Current Assignee: PNC Bank NA (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Haworth Inc
Priority claimed from US 13/478,994 (external priority; granted as US 9,430,140 B2)
Application filed by Haworth Inc
Priority to US 14/018,370 (published as US 2014/0055400 A1)
Priority to PCT/US2013/058261 (published as WO 2014/039680 A1)
Priority to PCT/US2013/058249 (published as WO 2014/039670 A1)
Assigned to HAWORTH, INC.: assignment of assignors' interest (see document for details). Assignors: REUSCHEL, Jeffrey Jon
Publication of US 2014/0055400 A1
Assigned to PNC BANK, NATIONAL ASSOCIATION, AS ADMINISTRATIVE AGENT: collateral assignment of patents. Assignors: HAWORTH, INC., HAWORTH, LTD. AND SUCCESSORS
Priority to US 15/457,752 (granted as US 11,740,915 B2)
Assigned to HAWORTH, INC. and HAWORTH, LTD.: release by secured party (see document for details). Assignors: PNC BANK, NATIONAL ASSOCIATION
Priority to US 18/217,451 (published as US 2023/0350703 A1)
Priority to US 18/226,187 (granted as US 11,886,896 B2)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G06F 9/452 — Remote windowing, e.g. X-Window System, desktop virtualisation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 — Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 — Drag-and-drop
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 — Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 — Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/038 — Indexing scheme relating to G06F3/038
    • G06F 2203/0383 — Remote input, i.e. interface arrangements in which the signals generated by a pointing device are transmitted to a PC at a remote location, e.g. to a PC in a LAN
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 — Indexing scheme relating to G06F3/048
    • G06F 2203/04803 — Split screen, i.e. subdividing the display area or the window area into separate subareas
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 — Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 — Indexing scheme relating to G06F3/048
    • G06F 2203/04808 — Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces

Definitions

  • HAWT 1001-2, which application claims priority under 35 U.S.C. § 119 to U.S. Provisional Patent Application No. 61/489,238, filed May 23, 2011, entitled “DIGITAL WHITEBOARD COLLABORATION APPARATUSES, METHODS AND SYSTEMS,” Attorney Docket No. HAWT 1001-1.
  • the entire contents of all the aforementioned applications are expressly incorporated by reference herein.
  • the present innovations generally address apparatuses, methods, and systems for digital collaboration, and more particularly, include DIGITAL WORKSPACE ERGONOMICS APPARATUSES, METHODS AND SYSTEMS (“DWE”).
  • users may be required to work collaboratively with each other to achieve efficient results in their undertakings. Such users may sometimes be located remotely from each other. The collaborative interactions between such users may sometimes require communication of complex information.
  • FIGS. 1A-1K show a block diagram illustrating example aspects of digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 2A-2B show data flow diagrams illustrating an example procedure to initiate a whiteboarding session for a user in some embodiments of the DWE;
  • FIGS. 3A-3B show logic flow diagrams illustrating example aspects of initiating a whiteboarding session for a user in some embodiments of the DWE, e.g., a Whiteboard Collaborator Session Initiation (“WCSI”) component 300 ;
  • FIG. 4 shows a logic flow diagram illustrating example aspects of generating viewport specification for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Client Viewport Specification (“CVS”) component 400 ;
  • FIG. 5 shows a logic flow diagram illustrating example aspects of generating viewport content for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Viewport Content Generation (“VCG”) component 500 ;
  • FIGS. 6A-6C show data flow diagrams illustrating an example procedure to facilitate collaborative whiteboarding among a plurality of users in some embodiments of the DWE;
  • FIGS. 7A-7D show logic flow diagrams illustrating example aspects of facilitating collaborative whiteboarding among a plurality of users in some embodiments of the DWE, e.g., a User Collaborative Whiteboarding (“UCW”) component 700 ;
  • FIGS. 8A-8I show block diagrams illustrating example aspects of a pie-menu user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 9A-9C show block diagrams illustrating example aspects of a chord-based user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIG. 10 shows a logic flow diagram illustrating example aspects of identifying user gestures of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a User Gesture Identification (“UGI”) component 1000 ;
  • FIGS. 11A-11B show block diagrams illustrating example aspects of a whiteboarding telepresence system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 12A-12B show a block diagram and logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE.
  • FIG. 13 shows a block diagram illustrating embodiments of a DWE controller.
  • FIG. 14 illustrates a conventional calculator.
  • FIG. 15 illustrates a traditional Desktop Application User Interface Paradigm.
  • FIGS. 16, 19, 36A, 36B, 37-40, 57 and 64 each illustrate images on a display with a user or users standing in front.
  • FIGS. 17, 18, 20, 23-29, 49-56 and 58-63 each illustrate images on a display.
  • FIGS. 21, 22, 30-35 and 41-48 each illustrate images on a display with a user's hand or finger touching it.
  • FIGS. 1A-1K show a block diagram illustrating example aspects of digital whiteboard collaboration in some embodiments of the DWE.
  • In some embodiments, a plurality of users, e.g., 101a-d, may desire to collaborate with each other; the users may be scattered across the globe in some instances.
  • Users may utilize a variety of devices in order to collaborate with each other, e.g., 102 a - c .
  • such devices may each accommodate a plurality of users (e.g., device 102 c accommodating users 101 c and 101 d ).
  • the DWE may utilize a central collaboration server, e.g., 105 , and/or whiteboard database, e.g., 106 , to achieve collaborative interaction between a plurality of devices, e.g., 104 a - c .
  • the whiteboard database may have stored a digital whiteboard.
  • a digital collaboration whiteboard may be stored as data in memory, e.g., in whiteboard database 106 .
  • the data may, in various implementations, include image bitmaps, video objects, multi-page documents, scalable vector graphics, and/or the like.
  • the digital collaboration whiteboard may be comprised of a plurality of logical subdivisions or tiles, e.g., 107 aa - 107 mn .
  • the digital whiteboard may be “infinite” in extent. For example, the number of logical subdivisions (tiles) may be as large as needed, subject only to memory storage and addressing considerations. For example, if the collaboration server utilizes 12-bit addressing (2^12 = 4,096 addressable tiles), then the number of tiles may be limited only by the addressing system and/or the amount of memory available in the whiteboard database.
  • each tile may be represented by a directory in a file storage system.
  • a directory may be created in the file system, e.g., 109 a - f .
  • each tile, e.g., a level 1 tile 110, may be comprised of a number of sub-tiles.
  • each sub-tile may be represented by a sub-folder in the file system, e.g., 113 .
  • tiles at each level may be comprised of sub-tiles of a lower level, thus generating a tree hierarchy structure, e.g., 112 - 114 .
  • a folder representing a tile may store a whiteboard object container.
  • a folder may be named according to its tile ID, e.g., 115 .
  • a folder having tile ID [11 02 07 44] may represent the 44th tile at the fourth level, under the 7th tile at the third level, under the 2nd tile at the second level, under the 11th tile at the first level.
  • such a folder may have stored whiteboard object container(s), e.g., 116 a - d .
  • the contents of the whiteboard object container may represent the contents of the tile in the digital whiteboard.
  • the object container may include files such as, but not limited to: bitmap images, scalable vector graphics (SVG) files, eXtensible Markup Language (XML)/JavaScript™ Object Notation files, and/or the like. Such files may include data on objects contained within the digital collaboration whiteboard.
  • each file stored within a tile folder may be named to represent a version number, a timestamp, and/or like identification, e.g., 117 a - d .
  • various versions of each tile may be stored in a tile folder.
  • each tile folder may include sub-folders representing layers of a tile of the digital whiteboard.
  • each whiteboard may be comprised of various layers of tile objects superimposed upon each other.
  • the hierarchical tree structure of folders may be replaced by a set of folders, wherein the file names of the folders represent the tile level and layer numbers of each tile/layer in the digital whiteboard. Accordingly, in such implementations, sub-tile/layer folders need not be stored within their parent folders, but may be stored alongside the parent folders in a flat file structure.
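  • As a purely illustrative sketch (the directory layout, folder names, and file names below are assumptions, not taken from the original specification), such a flat file structure encoding tile level, layer number, timestamp, and version might look like:

        /whiteboard/WB-4412/tile_11_02_07_44_layer01/2011-05-23T14-22-07Z_v0012.svg
        /whiteboard/WB-4412/tile_11_02_07_44_layer01/2011-05-23T14-25-31Z_v0013.svg
        /whiteboard/WB-4412/tile_11_02_07_44_layer02/2011-05-23T14-22-07Z_v0007.png
        /whiteboard/WB-4412/tile_11_02_08_01_layer01/2011-05-23T13-58-44Z_v0002.xml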
  • a whiteboard object container may include data representing various tile objects that may be displayed on the digital whiteboard.
  • the whiteboard object container may include data such as standalone videos, e.g., 121a (e.g., a link to a stored video), image objects, e.g., 121b, multi-page documents, e.g., 121c, freeform objects, e.g., 122, etc.
  • the whiteboard object container may include a remote window object.
  • a remote window object may comprise a link to another object, e.g., a video, RSS feed, live video stream, client display screen, etc.
  • the link between the remote window object and any other object may be dynamically reconfigurable, e.g., 119 .
  • a remote window-linked object e.g., 120 may be dynamically configured within the space reserved for the remote window within the digital whiteboard.
  • for example, a randomly varying video or the contents of an RSS feed may be configured to display within the space reserved for the remote window.
  • object metadata may be associated with each tile object.
  • the metadata associated with an object may include a description of the object, object properties, and/or instructions for the DWE when the object is interrogated by a user (e.g., modified, viewed, clicked on, etc.).
  • an object may have associated XML-encoded data such as the example XML data provided below:
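  • The original XML listing is not reproduced in this excerpt; the following is a minimal illustrative sketch of such object metadata, in which every element name, attribute name, and value is an assumption used only for illustration:

        <object id="obj_4512" type="video" tile_id="11_02_07_44" layer="2">
            <description>Recording of the quarterly planning discussion</description>
            <properties>
                <position x="120" y="480"/>
                <size width="640" height="360"/>
                <owner user_id="jdoe"/>
                <version number="12" timestamp="2011-05-23T14:22:07Z"/>
            </properties>
            <on_interrogate>
                <gesture type="tap" action="play_pause"/>
                <gesture type="two_finger_swipe" action="seek"/>
            </on_interrogate>
        </object>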
  • a client connected to a whiteboard collaboration session may communicate with the collaboration server to obtain a view of a portion of the digital whiteboard.
  • a client 126 may have associated with it a client viewport, e.g., a portion of the digital whiteboard 127 that is projected onto the client's display, e.g., 128 a .
  • the portion of tile objects, e.g., 129 a extending into the client viewport, e.g., 128 a , of the client, e.g., 126 may be depicted on the display of client 126 .
  • a user may modify the client viewport of the client.
  • the user may modify the shape of the client viewport, and/or the position of the client viewport.
  • the user may provide user input, e.g., touchscreen gestures 130 , to modify the client viewport from its state in 128 a to its state in 128 b .
  • the contents of the viewport may be modified from tile object 129 a to a portion of tile object 131 .
  • the portion of tile object 131 within the extent of the modified client viewport will be displayed on the display of client 126 .
  • the user may modify a tile object, e.g., 129 a into modified tile object 129 b , e.g., via user input 130 .
  • the modified tile object may be displayed on the display of the client 126 .
  • a plurality of users may be utilizing clients to view portions of a digital whiteboard.
  • client 133 a may receive client viewport data 135 a comprising a depiction of the tile objects extending into client viewport 134 a .
  • client 133 b may receive client viewport data 135 b comprising a depiction of the tile objects extending into client viewport 134 b .
  • client 133 c may receive client viewport data 135 c comprising a depiction of the tile objects extending into client viewport 134 c .
  • the client viewports of different clients may not overlap (e.g., those of client 133a and client 133c).
  • the client viewports of two or more clients may overlap with each other, e.g., the client viewports 134 b and 134 c of clients 133 b and 133 c .
  • the modification of the tile object may be reflected in all viewports into which the modified portion of the tile object extends.
  • a plurality of users may simultaneously observe the modification of a tile object made by another user, facilitating collaborative editing of the tile objects.
  • a user may utilize a client, e.g., 137 , to observe the modifications to a portion of a digital whiteboard across time/versions.
  • a user may position the client's viewport, e.g., 138 , over a portion of the digital whiteboard (e.g., via user gestures into the client 137 ), and observe a time/version-evolution animation, e.g., 139 , of that portion of the digital whiteboard on the client device's display using (time-stamped) versions, e.g., 136 a - d , of the digital whiteboard.
  • FIGS. 2A-2B show data flow diagrams illustrating an example procedure to initiate a whiteboarding session for a user in some embodiments of the DWE.
  • a user, e.g., 201, may utilize a client, e.g., 202, to join the digital whiteboarding collaboration session.
  • the client may be a client device such as, but not limited to, cellular telephone(s), smartphone(s) (e.g., iPhone®, Blackberry®, Android OS-based phones etc.), tablet computer(s) (e.g., Apple iPad™, HP Slate™, Motorola Xoom™, etc.), eBook reader(s) (e.g., Amazon Kindle™, Barnes and Noble's Nook™ eReader, etc.), laptop computer(s), notebook(s), netbook(s), gaming console(s) (e.g., XBOX Live™, Nintendo® DS, Sony PlayStation® Portable, etc.), portable scanner(s) and/or the like.
  • the user may provide collaborate request input, e.g., 211 , into the client, indicating the user's desire to join the collaborative whiteboarding session.
  • the user input may include, but not be limited to: keyboard entry, mouse clicks, depressing buttons on a joystick/game console, (3D; stereoscopic, time-of-flight 3D, etc.) camera recognition (e.g., motion, body, hand, limb, facial expression, gesture recognition, and/or the like), voice commands, single/multi-touch gestures on a touch-sensitive interface, touching user interface elements on a touch-sensitive display, and/or the like.
  • the user may utilize user touchscreen input gestures such as, but not limited to, the gestures depicted in FIGS. 8A-8I and 9A-9C.
  • the client may identify the user collaborate request input.
  • the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10 .
  • the client may generate and provide a user whiteboard request, e.g., 212 , to a server, e.g., collaboration server 203 .
  • the client may provide a (Secure) HyperText Transport Protocol (“HTTP(S)”) POST message with a message body encoded according to the eXtensible Markup Language (“XML”) and including the user collaborate request input information.
  • An example of such an HTTP(S) POST message is provided below:
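  • The message below is a minimal illustrative sketch rather than the original listing; the URL, host, and every XML field name and value are assumptions used only for illustration:

        POST /whiteboard/collaborate.php HTTP/1.1
        Host: www.collaborationserver.com
        Content-Type: application/xml
        Content-Length: 512

        <?xml version="1.0" encoding="UTF-8"?>
        <user_whiteboard_request>
            <timestamp>2011-05-23 14:22:07</timestamp>
            <user_credentials>
                <user_id>jdoe</user_id>
                <password>secretpass1234</password>
            </user_credentials>
            <session_id>WB-4412</session_id>
            <client_details>
                <client_id>CL-0092</client_id>
                <client_type>tablet</client_type>
                <display size_inches="9.7" resolution="1024x768"
                         orientation="landscape" aspect_ratio="4:3"/>
            </client_details>
            <collaborate_input gesture="single_finger_tap" x="512" y="384"/>
        </user_whiteboard_request>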
  • the server may parse the user whiteboarding request, and extract user credentials, e.g., 213 , from the user whiteboarding request. Based on the extracted user credentials, the server may generate an authentication query, e.g., 214 , for a database, e.g., users database 204 . For example, the server may query whether the user is authorized to join the collaborative whiteboarding session. For example, the server may execute a hypertext preprocessor (“PHP”) script including structured query language (“SQL”) commands to query the database for whether the user is authorized to join the collaborative whiteboarding session.
  • the database may provide, e.g., 215 , an authentication response to the server.
  • the server may determine, based on the authentication response, that the user is authorized to join the collaborative whiteboarding session.
  • the server may parse the user whiteboarding request and/or the authentication response, and obtain client specifications for the client 202 .
  • the server may extract client specifications including, but not limited to: display size, resolution, orientation, frame rate, contrast ratio, pixel count, color scheme, aspect ratio, 3D capability, and/or the like.
  • the server may generate a query for tile objects that lie within the viewport of the client.
  • the server may provide a tile objects query, e.g., 219 , to a database, e.g., whiteboard database 205 , requesting information on tile objects which may form part of the client viewport content displayed on the client 202 .
  • the server may provide the tile IDs of the tiles which overlap with the client viewport, and request a listing of tile object IDs and tile object data for objects which may partially reside within those tiles.
  • An example PHP/SQL command listing for querying a database for tile objects data within a single tile ID is provided below:
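  • The commands below are an illustrative sketch rather than the original listing; the database server address, database name, and table/column names are assumptions used only for illustration:

        <?php
        // access the database server and select the whiteboard database
        $link = mysql_connect("localhost", $DBserver, $password);
        mysql_select_db("DWE_DB", $link);
        // query for tile objects residing (at least partially) within a single tile ID
        $tile_id = "11020744";
        $query   = "SELECT tile_object_id, tile_object_data
                    FROM tile_objects
                    WHERE tile_id = '$tile_id'";
        $result  = mysql_query($query, $link); // perform the search query
        mysql_close($link);                    // close database access
        ?>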
  • the database may, in response to the tile objects query 219 , provide the requested tile objects data, e.g., 220 .
  • the database may provide a data structure representative of a scalable vector illustration, e.g., a Scalable Vector Graphics (“SVG”) data file.
  • the data structure may include, for example, data representing a vector illustration.
  • the data structure may describe a scalable vector illustration having one or more objects in the illustration.
  • Each object may be comprised of one or more paths prescribing, e.g., the boundaries of the object.
  • each path may be comprised of one or more line segments. For example, a number of very small line segments may be combined end-to-end to describe a curved path.
  • a plurality of such paths may be combined in order to form a closed or open object.
  • Each of the line segments in the vector illustration may have start and/or end anchor points with discrete position coordinates for each point.
  • each of the anchor points may comprise one or more control handles.
  • the control handles may describe the slope of a line segment terminating at the anchor point.
  • objects in a vector illustration represented by the data structure may have stroke and/or fill properties specifying patterns to be used for outlining and/or filling the object. Further information stored in the data structure may include, but not be limited to: motion paths for objects, paths, line segments, anchor points, etc.
  • the data structure including data on the scalable vector illustration may be encoded according to the open XML-based Scalable Vector Graphics “SVG” standard developed by the World Wide Web Consortium (“W3C”).
  • An exemplary XML-encoded SVG data file, written substantially according to the W3C SVG standard, and including data for a vector illustration comprising a circle, an open path, a closed polyline composed of a plurality of line segments, and a polygon, is provided below:
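  • The data file below is an illustrative sketch written substantially according to the W3C SVG standard (the coordinates and style values are assumptions chosen only for illustration); it contains a circle, an open path, a closed polyline composed of a plurality of line segments, and a polygon:

        <?xml version="1.0" standalone="no"?>
        <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
          "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
        <svg width="400" height="300" version="1.1" xmlns="http://www.w3.org/2000/svg">
          <!-- a circle with stroke and fill properties -->
          <circle cx="80" cy="80" r="40" stroke="black" stroke-width="2" fill="red"/>
          <!-- an open path composed of line and curve segments -->
          <path d="M 150 20 L 200 60 C 220 80 240 80 260 60" stroke="blue" fill="none"/>
          <!-- a closed polyline composed of a plurality of line segments -->
          <polyline points="20,200 60,160 100,200 60,240 20,200" stroke="green" fill="none"/>
          <!-- a polygon with stroke and fill properties -->
          <polygon points="220,150 300,150 340,220 260,260 180,220" stroke="black" fill="yellow"/>
        </svg>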
  • the server may generate client viewport data (e.g., bitmap, SVG file, video stream, RSS feed, etc.) using the tile objects data and client viewport specifications, e.g. 223 .
  • the server may provide the generated client viewport data and client viewport specifications as whiteboard session details and client viewport data, e.g., 224 .
  • the client may render, e.g. 225 , the visualization represented in the client viewport data for display to the user.
  • the client may be executing an Adobe® Flash object within a browser environment including ActionScript™ 3.0 commands to render the visualization represented in the data structure, and display the rendered visualization for the user.
  • Exemplary commands, written substantially in a form adapted to ActionScript™ 3.0, for rendering a visualization of a scene within an Adobe® Flash object with appropriate dimensions and specified image quality are provided below:
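  • The commands below are an illustrative sketch rather than the original listing; the variable names, dimensions, and drawing calls are assumptions used only for illustration:

        // set the image quality and scaling behaviour of the rendered scene
        import flash.display.Sprite;
        import flash.display.StageQuality;
        import flash.display.StageScaleMode;

        stage.scaleMode = StageScaleMode.NO_SCALE;
        stage.quality   = StageQuality.HIGH;

        // draw the client viewport contents into a container sprite of appropriate dimensions
        var viewport:Sprite = new Sprite();
        viewport.graphics.beginFill(0xFFFFFF);        // whiteboard background
        viewport.graphics.drawRect(0, 0, 1024, 768);  // viewport dimensions
        viewport.graphics.endFill();
        viewport.graphics.lineStyle(2, 0x3366CC);     // stroke for a freeform object
        viewport.graphics.moveTo(100, 100);
        viewport.graphics.curveTo(200, 50, 300, 100); // a curved path segment
        addChild(viewport);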
  • the client may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g. 226 , in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
  • the DWE may contain a library of pre-rendered images and visual objects indexed to be associated with one or more of search result terms or phrases, such as Clip Art files, e.g., accessible through Microsoft® PowerPoint application software.
  • FIGS. 3A-3B show logic flow diagrams illustrating example aspects of initiating a whiteboarding session for a user in some embodiments of the DWE, e.g., a Whiteboard Collaborator Session Initiation (“WCSI”) component 300 .
  • a user may desire to join a collaborative whiteboarding session on a digital whiteboard.
  • the user may utilize a client to join the digital whiteboarding collaboration session.
  • the user may provide collaborate request input, e.g., 301, into the client, requesting that the user join the whiteboarding session (e.g., via a whiteboarding app installed and/or executing on the client, such as an iPhone®/iPad® app, Adobe® Flash object, JavaScript™ code executing within a browser environment, application executable (*.exe) file, etc.).
  • the client may identify the user collaborate request input.
  • the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10 .
  • the client may generate and provide a user whiteboard request, e.g., 302 , to a collaboration server.
  • the collaboration server may parse the user whiteboarding request and extract user credentials, e.g., 303 .
  • Example parsers that the server may utilize are described further below in the discussion with reference to FIG. 13 .
  • the server may generate an authentication query, e.g., 304 , for a users database, e.g., by executing PHP/SQL commands similar to the examples above.
  • the database may provide an authentication response, e.g., 305 .
  • the server may parse the obtained authentication response, and extract the authentication status of the user/client, e.g., 306 . If the user is not authenticated, e.g., 307 , option “No,” the server may generate a login failure message, and/or may initiate an error handling routine, e.g., 308 .
  • the server may generate a collaborator acknowledgment, e.g., 309 , for the user/client.
  • the client may obtain the server's collaborator acknowledgment, e.g., 310 , and in some implementations, display the acknowledgment for the user, e.g., 311 .
  • the server may parse the user whiteboarding request and/or the authentication response, and obtain client specifications for the client. For example, the server may extract client specifications including, but not limited to: display size, resolution, orientation, frame rate, contrast ratio, pixel count, color scheme, aspect ratio, 3D capability, and/or the like, using parsers such as those described further below in the discussion with reference to FIG. 13 .
  • the server may generate client viewport specifications using the specifications of the client.
  • the server may utilize a component such as the example client viewport specification component 400 discussed further below with reference to FIG. 4 .
  • the server may generate a query for tile objects that lie within the viewport of the client.
  • the server may provide a tile objects query, e.g., 314 , to a whiteboard database 205 , requesting information on tile objects which may form part of the client viewport content displayed on the client.
  • the server may provide the tile IDs of the tiles which overlap with the client viewport, and request a listing of tile object IDs and tile object data for objects which may partially reside within those tiles.
  • the database may, in response to the tile objects query 314 , provide the requested tile objects data, e.g., 315 .
  • the server may generate a whiteboard session object, e.g., 316 , using the client viewport specifications and/or the tile objects data.
  • the server may store the whiteboard session object to a database, e.g., 317 .
  • the server may generate client viewport data (e.g., bitmap, SVG file, video stream, RSS feed, etc.) using the tile objects data and client viewport specifications, e.g. 318 .
  • the server may provide the generated client viewport data and client viewport specifications, e.g., 319 , to the client.
  • the client may render the visualization represented in the client viewport data for display to the user, e.g., using data and/or program modules similar to the examples provided above.
  • FIG. 4 shows a logic flow diagram illustrating example aspects of generating viewport specification for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Client Viewport Specification (“CVS”) component 400 .
  • a DWE component, e.g., a collaboration server, may obtain a request to generate and/or update client viewport specifications. The request may be in the form of an HTTP(S) POST message with XML-encoded message body, similar to the examples provided above.
  • the DWE may parse the request, and extract a client ID from the request.
  • the DWE may generate a query, e.g., 403 , for existing client viewport specifications associated with the client ID.
  • the DWE may utilize PHP/SQL commands to query a database, similar to the examples provided above. If an existing client viewport specification is available for the given client ID, e.g., 404, option “Yes,” the DWE may obtain the existing client viewport specification, e.g., from a database.
  • the DWE may parse the request, and extract any operations required to be performed on the existing client viewport specification (e.g., if the request is for updating the client viewport specification).
  • the request may include a plurality of client viewport modification instructions (e.g., convert viewport from rectangular shape to circular shape, modify the zoom level of the viewport, modify the aspect ratio of the viewport, modify the position of the viewport, etc.).
  • the DWE may select each instruction, e.g., 407 , and calculate an updated client viewport specification based on the instruction using the previous version of the client viewport specification, e.g., 408 .
  • the DWE may operate on the client viewport specifications using each of the instructions, e.g., 409 , until all client viewport modification operations have been performed, e.g., 409 , option “No.” In some implementations, the DWE may return the updated client viewport specifications, e.g., 413 .
  • the DWE may determine that there are no existing client viewport specifications.
  • the DWE may generate client viewport specification data variables, e.g., display size, resolution, shape, aspect ratio, zoom level, [x,y] position, whiteboard layers visible, etc., e.g., 410 .
  • the DWE may initially set default values for each of the client viewport specification variables.
  • the DWE may obtain the client device specifications (e.g., client's display monitor size, pixel count, color depth, resolution, etc.), e.g., 411 .
  • the DWE may calculate updated client viewport specification using the client device specifications and the default values set for each of the client viewport specification variables.
  • the DWE may return the calculated updated client viewport specifications, e.g., 413 .
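  • As an illustrative sketch (the element names, attribute names, and values below are assumptions used only for illustration), a client viewport specification of the kind described above might be represented as:

        <client_viewport client_id="CL-0092">
            <shape>rectangle</shape>
            <position x="10240" y="7680"/>      <!-- whiteboard coordinates of the viewport -->
            <size width="2048" height="1536"/>  <!-- extent of the viewport on the whiteboard -->
            <zoom>0.5</zoom>
            <aspect_ratio>4:3</aspect_ratio>
            <visible_layers>1,2</visible_layers>
            <display resolution="1024x768" pixel_count="786432" color_depth="24"/>
        </client_viewport>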
  • FIG. 5 shows a logic flow diagram illustrating example aspects of generating viewport content for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Viewport Content Generation (“VCG”) component 500 .
  • a component of the DWE, e.g., the collaboration server, may obtain a request to generate viewport content for a client. The DWE may parse the request, and extract a client ID from the request, e.g., 502.
  • the DWE may generate a query, e.g., 503 , for client viewport specifications associated with the client ID.
  • the DWE may utilize PHP/SQL commands to query a database, similar to the examples provided above.
  • the DWE may obtain the existing client viewport specification, e.g., from a database, e.g., 504 .
  • the DWE may determine tile IDs of whiteboard tiles that overlap with the client viewport of the client, e.g., 505 .
  • the DWE may calculate the extent of the client viewport using the client viewport specifications (e.g., position coordinates and length/width). Based on the extent of the client viewport, the DWE may determine which of the tiles the client viewport extends into, and obtain the tile IDs of the determined whiteboard tiles.
  • the DWE may obtain tile object data for all tile objects that lie within the tile IDs into which the client viewport extends. For example, the DWE may query, e.g., 506, for tile objects data of all tile objects that extend into tiles that the client viewport also extends into, and obtain such data from a database, e.g., 507. In some implementations, using the tile objects data, the DWE may generate a rendered bitmap of the tiles corresponding to the determined tile IDs, e.g., 508. In alternate implementations, the DWE may generate SVG files, video, documents, and/or like objects that may be displayed on the client's display monitor.
  • the DWE may determine a portion of the rendered bitmap that overlaps with the client viewport, based on the client viewport specifications, e.g., 509 .
  • the DWE may extract the determined portion of the rendered bitmap, e.g., 510 , and provide the portion as updated client viewport data to the client, e.g., 511 .
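  • A minimal sketch (the function name, variable names, tile size, and tile ID format below are assumptions used only for illustration) of determining which tile IDs a rectangular client viewport extends into, given the viewport's position coordinates and length/width and a fixed tile size:

        <?php
        // return the IDs of all tiles that a rectangular viewport overlaps,
        // assuming square tiles of $tile_size whiteboard units
        function overlapping_tile_ids($x, $y, $width, $height, $tile_size) {
            $ids = array();
            $first_col = (int) floor($x / $tile_size);
            $last_col  = (int) floor(($x + $width  - 1) / $tile_size);
            $first_row = (int) floor($y / $tile_size);
            $last_row  = (int) floor(($y + $height - 1) / $tile_size);
            for ($row = $first_row; $row <= $last_row; $row++) {
                for ($col = $first_col; $col <= $last_col; $col++) {
                    $ids[] = sprintf("%02d_%02d", $row, $col); // tile ID for this row/column
                }
            }
            return $ids;
        }
        // e.g., a 2048x1536 viewport positioned at (10240, 7680) with 1024-unit tiles
        $tiles = overlapping_tile_ids(10240, 7680, 2048, 1536, 1024);
        ?>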
  • FIGS. 6A-6C show data flow diagrams illustrating an example procedure to facilitate collaborative whiteboarding among a plurality of users in some embodiments of the DWE.
  • a user, e.g., 601a, may desire to modify the contents of a digital whiteboard (e.g., one of a plurality of digital whiteboards) included within the collaborative whiteboarding session.
  • the user may utilize a client, e.g., 602 a , to participate in the digital whiteboarding collaboration session.
  • the user may provide whiteboard input, e.g., 611 , into the client, indicating the user's desire to modify the collaborative whiteboarding session (e.g., modify the contents of a digital whiteboard; modify a participating client's view of a digital whiteboard, etc.).
  • the whiteboard input may include, but not be limited to: keyboard entry, mouse clicks, depressing buttons on a joystick/game console, (3D; stereoscopic, time-of-flight 3D, etc.) camera recognition (e.g., motion, body, hand, limb, facial expression, gesture recognition, and/or the like), voice commands, single/multi-touch gestures on a touch-sensitive interface, touching user interface elements on a touch-sensitive display, and/or the like.
  • the user may utilize user touchscreen input gestures such as, but not limited to, the gestures depicted in FIGS. 8A-8I and FIGS. 9A-9C .
  • the client may capture the user's whiteboard input, e.g., 612 .
  • the client may identify the user's whiteboard input in some implementations.
  • the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10 , to identify gesture(s) made by the user on a touchscreen display of the client to modify the collaborative whiteboarding session.
  • the client may generate and provide a whiteboard input message, e.g., 613 , to a server, e.g., collaboration server 603 .
  • the client may provide a (Secure) HyperText Transport Protocol (“HTTP(S)”) POST message with an XML-encoded message body including the user whiteboard input and/or identified user gesture(s).
  • An example of such a HTTP(S) POST message is provided below:
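  • The message below is a minimal illustrative sketch rather than the original listing; the URL, host, and every XML field name and value are assumptions used only for illustration:

        POST /whiteboard/input.php HTTP/1.1
        Host: www.collaborationserver.com
        Content-Type: application/xml
        Content-Length: 420

        <?xml version="1.0" encoding="UTF-8"?>
        <whiteboard_input_message>
            <timestamp>2011-05-23 14:25:31</timestamp>
            <user_id>jdoe</user_id>
            <client_id>CL-0092</client_id>
            <session_id>WB-4412</session_id>
            <gestures>
                <gesture type="two_finger_swipe" direction="down"
                         start_x="310" start_y="220" end_x="312" end_y="470"
                         pressure="0.6" duration_ms="240"/>
            </gestures>
        </whiteboard_input_message>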
  • the server may parse the user whiteboard input, and extract the user ID, client ID, and/or user gestures from the whiteboard input message, e.g., 614 . Based on the extracted information, the server may generate a whiteboard session query, e.g., 615 , for the gesture context, e.g., the viewport content of the client 602 a being used by the user. For example, the server may query a database, e.g., whiteboard database 605 , for the client viewport specifications and tile objects corresponding to the client viewport specifications.
  • An example PHP/SQL command listing for querying a database for client viewport specifications, and tile objects data within a single tile ID, is provided below:
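  • The commands below are an illustrative sketch rather than the original listing; the database server address, database name, and table/column names are assumptions used only for illustration:

        <?php
        // access the database server and select the whiteboard database
        $link = mysql_connect("localhost", $DBserver, $password);
        mysql_select_db("DWE_DB", $link);
        // retrieve the client viewport specifications for the requesting client
        $spec_query  = "SELECT viewport_spec FROM client_viewports
                        WHERE client_id = 'CL-0092'";
        $spec_result = mysql_query($spec_query, $link);
        // retrieve tile objects data within a single tile ID overlapping that viewport
        $tile_query  = "SELECT tile_object_id, tile_object_data
                        FROM tile_objects
                        WHERE tile_id = '11020744'";
        $tile_result = mysql_query($tile_query, $link);
        mysql_close($link); // close database access
        ?>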
  • the database may, in response to the whiteboard session query, provide the requested client viewport specifications and tile objects data, e.g., whiteboard session object 616 .
  • the database may provide an SVG data file representing the tile objects and/or an XML data file representing the client viewport specifications.
  • the server may determine the user's intended instructions based on the user's gestures and the gesture context, e.g., as retrieved from the database.
  • the user's intended instructions may depend on the context within which the user gestures were made.
  • each user gesture may have a pre-specified meaning depending on the type of tile object upon which the user gesture was made.
  • a particular user gesture may have a pre-specified meaning depending on whether the object above which the gesture was made was a video, or a multi-page document.
  • the tile object on which the gesture was made may include gesture/context interpretation instructions, which the server may utilize to determine the appropriate instructions intended by the user.
  • the server and/or databases may have stored gesture/context interpretation instructions for each type of object (e.g., image, SVG vector image, video, remote window, etc.), and similar user instructions may be inferred from a user gesture above all objects of a certain type.
  • the server may extract the user gesture context, e.g., 617 , from the user whiteboard session object.
  • the server may query a database, e.g., gestures database 606 , for user instructions lookup corresponding to the user gestures and/or user gesture context.
  • An example PHP/SQL command listing for querying a database for user instruction lookup is provided below:
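  • The commands below are an illustrative sketch rather than the original listing; the database server address, database name, and table/column names are assumptions used only for illustration:

        <?php
        // access the database server and select the gestures database
        $link = mysql_connect("localhost", $DBserver, $password);
        mysql_select_db("DWE_DB", $link);
        // look up the instruction associated with a gesture made above a given type of object
        $query  = "SELECT instruction FROM gesture_instructions
                   WHERE gesture_type = 'two_finger_swipe'
                   AND   object_type  = 'multi_page_document'";
        $result = mysql_query($query, $link); // perform the lookup
        mysql_close($link);                   // close database access
        ?>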
  • the database may, in response to the user instruction lookup request, provide the requested user instruction lookup response, e.g., 619 .
  • the server may also query, e.g., 621 , for tile objects within the client's viewport (e.g., using PHP/SQL commands similar to the examples above), and obtain, e.g., 622 , from the whiteboard database 605 , the tile objects data pertaining to tile objects within the viewport of the client.
  • the server may parse the user instruction lookup response and extract instructions to execute from the response.
  • the user instruction lookup response may include instructions to modify tile objects and/or instructions to modify the client viewport(s) of client(s) in the whiteboarding session.
  • the server may extract tile object modification instructions, e.g., 623, and generate updated tile objects based on the existing tile object data and the extracted tile object modification instructions.
  • the server may parse the user instruction lookup response and extract instructions to modify the viewport of client(s).
  • the server may generate, e.g., 624 , updated client viewport specifications and/or client viewport data using the updated tile objects, existing client viewport specifications, and/or extracted client viewport modification instructions.
  • the server may query (e.g., via PHP/SQL commands) for clients whose viewport contents should be modified to account for the modification of the tile objects and/or client viewport specifications, e.g., 625 .
  • the server may provide, e.g., 626 , the query to the whiteboard database, and obtain, e.g., 627 , a list of clients whose viewport contents have been affected by the modification.
  • the server may refresh the affected clients' viewports.
  • the server may generate, for each affected client, updated client viewport specifications and/or client viewport content using the (updated) client viewport specifications and/or (updated) tile objects data, e.g., 629 .
  • the server may store, e.g., 630 - 631 , the updated tile objects data and/or updated client viewport specifications (e.g., via updated whiteboard session objects, updated client viewport data, etc.).
  • the server may provide the (updated) whiteboard session details and/or (updated) client viewport data, e.g., 632 a - c , to each of the affected client(s), e.g., 601 a - c .
  • the client(s) may render, e.g. 633 a - c , the visualization represented in the client viewport data for display to the user, e.g., using data and/or program module(s) similar to the examples provided above with reference to FIG. 2 .
  • the client(s) may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g. 633 a - c , in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
  • FIGS. 7A-7D show logic flow diagrams illustrating example aspects of facilitating collaborative whiteboarding among a plurality of users in some embodiments of the DWE, e.g., a User Collaborative Whiteboarding (“UCW”) component 700 .
  • a user may desire to collaborate with other users in a collaborative whiteboarding session.
  • the user may desire to modify the contents of a digital whiteboard (e.g., one of a plurality of digital whiteboards) included within the collaborative whiteboarding session.
  • the user may provide whiteboard input, e.g., 701 , within a whiteboarding session into the client, indicating the user's desire to modify the collaborative whiteboarding session (e.g., modify the contents of a digital whiteboard; modify a participating client's view of a digital whiteboard, etc.).
  • the client may capture the user's whiteboard input.
  • the client may identify the user's whiteboard input in some implementations, e.g., 702 .
  • the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10 , to identify gesture(s) made by the user on a touchscreen display of the client to modify the collaborative whiteboarding session.
  • the client may generate and provide a whiteboard input message, e.g., 703 , to a collaboration server.
  • the server may parse the user whiteboard input, and extract the user ID, client ID, etc. from the whiteboard input message, e.g., 704 . Based on the extracted information, the server may generate a whiteboard session query, e.g., 705 , for the gesture context, e.g., the viewport content of the client being used by the user.
  • a database may, in response to the whiteboard session query, provide the requested client viewport specifications and tile objects data, e.g., whiteboard session object 706 .
  • the database may provide an SVG data file representing the tile objects and/or an XML data file representing the client viewport specifications.
  • the server may parse the whiteboard session object, and extract user context, e.g., client viewport specifications, tile object IDs of tile objects extending into the client viewport, client app mode (e.g., drawing/editing/viewing, etc., portrait/landscape, etc.), e.g., 707 .
  • the server may parse the whiteboard session object and extract user gesture(s) made by the user into the client during the whiteboard session, e.g., 708 .
  • the server may attempt to determine the user's intended instructions based on the user's gestures and the gesture context, e.g., as retrieved from the database. For example, the user's intended instructions may depend on the context within which the user gestures were made.
  • each user gesture may have a pre-specified meaning depending on the type of tile object upon which the user gesture was made.
  • a particular user gesture may have a pre-specified meaning depending on whether the object above which the gesture was made was a video, or a multi-page document.
  • the tile object on which the gesture was made may include custom object-specific gesture/context interpretation instructions, which the server may utilize to determine the appropriate instructions intended by the user.
  • the server and/or databases may have stored system-wide gesture/context interpretation instructions for each type of object (e.g., image, SVG vector image, video, remote window, etc.), and similar user instructions may be inferred from a user gesture above all objects of a certain type.
  • the server may query a whiteboard database for user instructions lookup corresponding to the user gestures and/or user gesture context, e.g., 709 .
  • the database may, in response to the user instruction lookup request, provide the requested user instruction lookup response, e.g., 710 .
  • the server may also query for tile objects within the client's viewport, and obtain the tile objects data pertaining to tile objects within the viewport of the client.
  • the server may parse the user instruction lookup response and extract instructions to execute from the response, e.g., 711 .
  • the user instruction lookup response may include instructions to modify tile objects and/or instructions to modify the client viewport(s) of client(s) in the whiteboarding session.
  • the server may extract tile object modification instructions, e.g., 712 .
  • the server may modify tile object data of the tile objects in accordance with the tile object modifications instructions.
  • the server may select a tile object modification instruction, e.g., 714 .
  • the server may parse the tile object modification instruction, and extract object IDs of the tile object(s) to be operated on, e.g., 715 .
  • the server may determine the operations to be performed on the tile object(s).
  • the server may generate a query for the tile object data of the tile object(s) to be operated on, e.g., 716 , and obtain the tile object data, e.g., 717 , from a database.
  • the server may generate updated tile object data for each of the tile objects operated on, using the current tile object data and the tile object modification operations from the tile modification instructions, e.g., 718 .
  • the server may store the updated tile object data in a database, e.g., 719 .
  • the server may repeat the above procedure until all tile object modification instructions have been executed, see, e.g., 713 .
  • the server may parse the user instruction lookup response, e.g., 720 , and extract client viewport modification instructions, e.g., 721 .
  • the server may modify client viewport specifications of the client(s) in accordance with the viewport modifications instructions. For example, the server may select a viewport instruction, e.g., 723 .
  • the server may parse the viewport modification instruction, and extract client IDs for which updated viewport specifications are to be generated, e.g., 724 .
  • the server may determine the operations to be performed on the client viewport specifications.
  • the server may generate a whiteboard object query for the viewport specifications to be operated on, e.g., 725, and obtain the whiteboard session object including the viewport specifications, e.g., 726, from a database.
  • the server may generate updated client viewport specifications for each of the client viewports being operated on, using the current client viewport specifications and the viewport modification operations from the viewport modification instructions, e.g., 727 .
  • the server may utilize a component such as the client viewport specification component 400 described with reference to FIG. 4 .
  • the server may store the updated client viewport specifications via an updated whiteboard specification object in a database, e.g., 728 .
  • the server may repeat the above procedure until all client viewport modification instructions have been executed, see, e.g., 722.
  • the server may query (e.g., via PHP/SQL commands) for clients whose viewport contents should be modified to account for the modification of the tile objects and/or client viewport specifications, e.g., 729 - 730 .
  • the server may provide the queries to the whiteboard database, and obtain, e.g., 731 , a list of clients whose viewport contents have been affected by the modification.
  • the server may refresh the affected clients' viewports.
  • the server may generate, e.g., 732 , for each affected client, updated client viewport specifications and/or client viewport content using the (updated) client viewport specifications and/or (updated) tile objects data.
  • the server may utilize a component such as the viewport content generation component 500 described with reference to FIG. 5 .
  • the server may store, e.g., 733 , the updated tile objects data and/or updated client viewport specifications (e.g., via updated whiteboard session objects, updated client viewport data, etc.).
  • the server may provide the (updated) whiteboard session details and/or (updated) client viewport data, e.g., 734 , to each of the affected client(s).
  • the client(s) may render, e.g., 735 , the visualization represented in the client viewport data for display to the user, e.g., using data and/or program module(s) similar to the examples provided above with reference to FIG. 2 .
  • the client(s) may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g. 736 , in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
  • FIGS. 8A-8I show block diagrams illustrating example aspects of a pie-menu user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE.
  • the DWE may provide a variety of features for the user when the user provides input gestures into a client device involved in a digital collaborative whiteboarding session. For example, under a main menu 801 , the DWE may provide a variety of palette/drawing tools 802 , library tools 803 and/or mini-map/finder tools 804 .
  • the DWE may provide a variety of palette/drawing tools, including but not limited to: colors 802 a , stroke type 802 b , precision drawing mode 802 c , eraser 802 d , cut 802 e , effects 802 f , styles 802 g , tags 802 h , undo feature 802 i , and/or the like.
  • the DWE may provide library tools such as, but not limited to: import/open file 803 a , access clipboard 803 b , and/or the like 803 c .
  • the DWE may provide mini-map/finder tools such as, but not limited to: zoom 804 a , collaborators 804 b , bookmarks 804 c , timeline view 804 d , and/or the like.
  • a user may access a main menu by pressing the touchscreen with a single finger, e.g., 805 .
  • a menu such as a pie menu, e.g., 807 , may be provided for the user when the user attempts to access the main menu by pressing a single finger on the touchscreen, e.g., 806 .
  • the user may press a stylus against the touchscreen, e.g., 808 .
  • the menu options provided to the user may vary depending on whether the user uses a single finger touch or a single stylus touch.
  • a user may access a drawing menu by swiping down on the touchscreen with three fingers, e.g., 809 .
  • a menu, such as a drawing menu, e.g., 811, may be provided for the user.
  • a drawing palette may include a variety of tools.
  • the drawing palette may include a drawing tool selector, e.g., 811 , for selecting tools from a drawing palette.
  • a variety of commonly used drawing tools may be provided separately for the user to easily access. For example, an eraser tool, 811 a , cut tool 811 b , tag tool 811 c , help tool 811 d , and/or the like may be provided as separate user interface objects for the user to access.
  • a user may select a color from a color picker tool within the drawing palette menu. For example, the user may swipe three fingers on the touchscreen to obtain the drawing palette, e.g., 812 . From the drawing palette, the user may select a color picker by tapping on an active color picker, e.g., 813 . Upon tapping the color picker, a color picker menu, e.g., 814 may be provided for the user via the user interface.
  • a user may tag an object within the digital whiteboard, e.g., 815 .
  • the user may tap on a user interface element, e.g., 816 .
  • the user may be provided with a virtual keyboard 818 , as well as a virtual entry form 817 for the user to type a tag into via the virtual keyboard.
  • a user may enter into a precision drawing mode, wherein the user may be able to accurately place/draw tile objects.
  • the user may place two fingers on the touchscreen and hold the finger positions.
  • the user may be provided with precision drawing capabilities.
  • the user may be able to precisely draw a line to the length, orientation and placement of the user's choosing, e.g., 820 .
  • the user may be able to draw precise circles, e.g., 821 , rectangles, e.g., 822 , and/or the like.
  • the precision of any drawing tool provided may be enhanced by entering into the precision drawing mode by using the two-finger hold gesture.
  • a user may be able to toggle between an erase and draw mode using a two-finger swipe. For example, if the user swipes downwards, an erase mode may be enabled, e.g., 824 , while if the user swipes upwards, the draw mode may be enabled, e.g., 825 .
  • a user may be able to obtain an overall map of the whiteboard by swiping all five fingers down simultaneously, e.g., 826.
  • a map of the digital whiteboard e.g., 828 may be provided for the user.
  • the user may be able to zoom in or zoom out of a portion of the digital whiteboard by using two fingers, and moving the two fingers either together (e.g., zoom out) or away from each other (e.g., zoom in), see, e.g., 829 .
  • a variety of features and/or information may be provided for the user.
  • the user may be provided with a micromap, which may provide an indication of the location of the user's client viewport relative to the rest of the digital whiteboard.
  • the user may be provided with information on other users connected to the whiteboarding session, objects within the whiteboard, tags providing information on owners of objects in the whiteboard, a timeline showing the amount of activity as a function of time, and/or the like information and/or features.
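  • as an illustrative sketch only, the gesture-to-menu mappings described above (single-finger press, three-finger swipe, five-finger swipe, two-finger swipes) might be dispatched as follows; the Gesture key format and the handler names are assumptions made for illustration, not DWE-defined behavior:

```python
# Hypothetical dispatch table from (touch count, motion) to whiteboarding tools.
from typing import Callable, Dict, Tuple

Gesture = Tuple[int, str]  # (number of touches, swipe direction or "press")

def show_pie_menu():        print("main pie menu (e.g., 807)")
def show_drawing_palette(): print("drawing palette (e.g., 811)")
def show_whiteboard_map():  print("whiteboard mini-map (e.g., 828)")
def enable_erase_mode():    print("erase mode (e.g., 824)")
def enable_draw_mode():     print("draw mode (e.g., 825)")

GESTURE_HANDLERS: Dict[Gesture, Callable[[], None]] = {
    (1, "press"): show_pie_menu,         # single-finger press
    (3, "down"):  show_drawing_palette,  # three-finger swipe down
    (5, "down"):  show_whiteboard_map,   # five-finger swipe down
    (2, "down"):  enable_erase_mode,     # two-finger swipe down
    (2, "up"):    enable_draw_mode,      # two-finger swipe up
}

def dispatch(num_touches: int, motion: str) -> None:
    handler = GESTURE_HANDLERS.get((num_touches, motion))
    if handler:
        handler()
```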
  • FIGS. 9A-9C show block diagrams illustrating example aspects of a chord-based user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE.
  • a chord-based gesture system may utilize a number of variables to determine the meaning of a user gesture, e.g., the intentions of a user to instruct the DWE.
  • variables such as, but not limited to: number of fingers/styli inputs in the chord 901 , pressure and area of application of pressure on each chord element 902 , contextual information about the object underneath the chord 903 , displacement, velocity, direction of the chord movement 904 , timing associated with the chord (e.g., length of hold, pause, frequency/duty cycle of tapping, etc.), and/or the like, may affect the interpretation of what instructions are intended by a gesture made by the user.
  • chords of various types may be utilized to obtain menus, perform drawing, editing, erasing features, modify the view of the client, find editing collaborators, and/or the like, see, e.g., 906 .
  • displacing a single finger on an empty portion of the screen may automatically result in a draw mode, and a line may be drawn on the screen following the path of the single finger, e.g., 907.
  • holding a finger down and releasing quickly may provide a precision drawing mode, wherein when a finger is drawn along the screen, a line may be drawn with high precision following the path of the finger, e.g., 908 - 909 .
  • holding a finger down and releasing after a longer time may provide a menu instead of a precision drawing mode, e.g., 910.
  • an eraser tool may be provided underneath the position of the three-finger chord.
  • an eraser tool may also be displaced underneath the chord, thus erasing (portions of) objects over which the chord is passed by the user, e.g., 911.
  • a zoom tool may be provided for the user. The user may then place two fingers down on the screen, and move the fingers together or away from each other to zoom out or zoom in respectively, e.g., 912 .
  • this may provide the user with a tool to select an object on the screen, and modify the object (e.g., modify the scale, aspect ratio, etc. of the object), e.g., 913 .
  • the user may be provided with a pan function, e.g., 914 .
  • a zoom and/or overview selection user interface element, e.g., 915, may be provided for the user.
  • various gesture features may be provided depending on the attributes of the chord, including, but not limited to: the number of chord elements, timing of the chord elements, pressure/area of the chord elements, displacement/velocity/acceleration/orientation of the chord elements, and/or the like.
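  • one non-limiting way to capture the chord attributes enumerated above (element count, pressure/area, timing, motion) is a per-element record such as the following sketch; the field names are illustrative assumptions, not DWE-defined identifiers:

```python
# Hypothetical data record for a chord and its constituent touch elements.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChordElement:
    position: Tuple[float, float]      # [x, y] coordinates of the touch
    pressure: float                    # touchscreen pressure
    contact_area: float                # contact area (finger vs. stylus hint)
    hold_duration_ms: float            # timing attribute
    displacement: Tuple[float, float]  # motion attributes
    velocity: Tuple[float, float]

@dataclass
class Chord:
    elements: List[ChordElement] = field(default_factory=list)

    @property
    def size(self) -> int:
        # Number of simultaneous finger/stylus inputs in the chord (e.g., 901).
        return len(self.elements)
```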
  • FIG. 10 shows a logic flow diagram illustrating example aspects of identifying user gestures of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a User Gesture Identification (“UGI”) component 1000 .
  • a user may provide input (e.g., one or more touchscreen gestures) into a client, e.g., 1001.
  • the client may obtain the user input raw data, and identify a chord based on the raw data. For example, the client may determine the number of fingers pressed onto the touchscreen, whether a stylus was incorporated in the user touch raw data, which fingers of the user were pressed onto the touchscreen, and/or the like, e.g., 1002 .
  • the client may determine the spatial coordinates of each of the chord elements (e.g., wherein each simultaneous finger/stylus touch is a chord element of the chord comprised of the finger/stylus touches), e.g., 1003 .
  • the client may determine the [x,y] coordinates for each chord element.
  • the client may determine the touch screen pressure for each chord element, area of contact for each chord element (e.g., which may also be used to determine whether a chord element is a finger or a stylus touch, etc.), e.g., 1004 .
  • the client may determine time parameters for each chord element of the chord, e.g., 1005 .
  • the client may determine parameters such as hold duration, touch frequency, touch interval, pause time, etc. for each chord element of the chord and/or an average time for each such parameter for the entire chord.
  • the client may determine motion parameters for each chord element of the chord, e.g., 1006 .
  • the client may determine displacement, direction vector, acceleration, velocity, etc. for each chord element of the chord.
  • the client may determine whether the chord gesture is for modifying a client view, or for modifying a tile object present in a digital whiteboard.
  • the client may generate a query (e.g., of a database stored in the client's memory) to determine whether the identified chord operates on the client viewport or tile objects.
  • the client may generate a query for a gesture identifier, and associated instructions using the chord, spatial location, touchscreen pressure, time parameters, motion parameters, and/or the like. If the client determines that the chord operates on tile object(s), e.g., 1008 , option “No,” the client may identify the tile object(s) affected by the user input using the location and motion parameters for the chord elements, e.g., 1010 . The client may determine whether the tile object(s) has any associated context/gesture interpretation instructions/data, e.g., 1011 .
  • if the object does not have custom context instructions, e.g., 1012, option “No,” the client may utilize system-wide context interpretation instructions based on the object type of the tile object, e.g., 1013. If the object has custom context instructions, e.g., 1012, option “Yes,” the client may obtain the custom object-specific context interpretation instructions, e.g., 1014. In some implementations, the client may determine the gesture identifier and associated instructions using the chord, spatial location, touchscreen pressure, time parameters and motion parameters, as well as object/system-specified context interpretation instructions, e.g., 1015, and may return the user gesture identifier and associated gesture instructions, e.g., 1016. It is to be understood that any of the actions recited above may be performed by the client and/or any other entity and/or component of the DWE.
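  • as an illustrative, non-limiting sketch of the UGI decision flow above (e.g., 1007-1016), reusing the hypothetical chord record sketched earlier; the database lookups (chord_targets_viewport, find_tile_objects, system_context, etc.) are assumptions for illustration only:

```python
# Hypothetical gesture-identification flow: viewport gestures vs. tile-object
# gestures, with object-specific or system-wide context interpretation.
def identify_gesture(client_db, chord):
    """Return (gesture_id, instructions) for a chord, or (None, None) if unknown."""
    # e.g., 1007-1009: does the chord operate on the client viewport?
    if client_db.chord_targets_viewport(chord):
        return client_db.lookup_viewport_gesture(chord)

    # e.g., 1010: otherwise identify the tile object(s) under the chord
    objects = client_db.find_tile_objects(chord)

    # e.g., 1011-1014: prefer object-specific context instructions, else fall
    # back to system-wide instructions for the object type
    for obj in objects:
        context = obj.custom_context or client_db.system_context(obj.object_type)
        gesture_id, instructions = context.interpret(chord)
        if gesture_id is not None:
            return gesture_id, instructions   # e.g., 1015-1016

    return None, None
```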
  • FIGS. 11A-11B show block diagrams illustrating example aspects of a whiteboarding telepresence system for digital whiteboard collaboration in some embodiments of the DWE.
  • a plurality of users may be collaborating with each other, for example, via a digital whiteboard collaboration system as described above.
  • the users may be interacting with each other via other communication and/or collaboration systems.
  • a user, e.g., 1101 a, may be utilizing a touchscreen interface, e.g., 1102 a, while user 1101 b may be utilizing touchscreen interface 1102 b.
  • the touchscreen interfaces may be operating in conjunction with other DWE components to provide a digital whiteboard collaboration experience for the users.
  • the user may utilize a telepresence system, e.g., 1103 a - b , to enhance the collaborative session between the users.
  • a user 1101 a may be able to visualize 1101 b via the telepresence system.
  • the user 1101 a may be able to hear (e.g., via a speaker system) and see (e.g., via auxiliary display) user 1101 b .
  • the user 1101 a may also be able to speak to user 1101 b via a microphone, and may be able to provide a video of himself (e.g., via a camera).
  • user 1101 b may be able to see and hear user 1101 a, and provide audio and video to user 1101 a via user 1101 b's telepresence interface.
  • users utilizing different types of device may interactively collaborate via a telepresence system.
  • user 1104 a may be utilizing a large-screen touch interface, e.g., 1105 a, while a user 1104 b may be utilizing a portable device, e.g., 1105 b.
  • the user interface of the collaborative session, as well as the telepresence system may be modified according to the device being used by the user in the collaborative session.
  • the user 1104 a utilizing the large-screen touch interface 1105 a , may be utilizing an auxiliary telepresence system 1106 a .
  • the user 1104 b may, however, be utilizing a telepresence system inbuilt into the device, e.g., 1106 b . Accordingly, in some implementations, the users may interact with each other via telepresence for collaborative editing across a variety of user devices.
  • FIGS. 12A-12B show a block diagram and logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE.
  • the DWE may provide a multi-user touchscreen device 1201 (see also FIG. 1 , 102 c ).
  • the touchscreen device may include “display-only” cells 1202 that do not have any touch capability (e.g., LCD, LED displays, projection displays, etc.).
  • the touchscreen device may include strategically positioned touch cells, e.g., 1203 , with which a user 1208 may interact using gestures such as those described above in the description with respect to FIGS. 8A-8I .
  • the touch cells may be placed within an ergonomic zone 1204 designed such that an average user would be likely to feel comfortable accessing the ergonomic zone to provide touch gestures into the touch cell.
  • the multi-user touchscreen display may be augmented to provide continuous projection via a display system aligned adjacent to the touchscreen display along a non-parallel (or even non-linear/non-Euclidean) plane such as a wall, screen, or structure (e.g., curved surface of a building).
  • a portion of the touch cells within the ergonomic zone may be enabled at any time.
  • a camera may be placed in the vicinity of the touchscreen device, and it may record video of the neighborhood of the touchscreen device. The camera may record users in the neighborhood.
  • the DWE may identify such users in the video, determine the touch cells within the ergonomic zone that are closest to the identified users, and may enable those touch cells alone for the users to provide touchscreen input into the touch cells.
  • the DWE may provide visual indicators of the enabled touch cells so that users may identify easily the touch cells that are enabled.
  • a user may provide touch input into a touch cell, making it an “active” touch cell, e.g., 1205 .
  • the DWE may provide a floating toolbar for the user to access features provided by the DWE for digital whiteboard collaboration.
  • the position of the floating toolbar may automatically be determined by the DWE based on the touch cells that are active, and the coordinate locations of touch input provided by the users into the active and enabled touch cells.
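  • a minimal, non-limiting sketch of enabling only the touch cells nearest camera-detected users, as described above; the TouchCell record, the single-axis distance metric, and the helper names are illustrative assumptions:

```python
# Hypothetical sketch: enable only the ergonomic-zone touch cells closest to
# users detected by the camera; disabled cells remain display-only.
from dataclasses import dataclass
from typing import List

@dataclass
class TouchCell:
    cell_id: int
    center_x: float   # horizontal position along the display wall
    enabled: bool = False

def enable_nearest_cells(cells: List[TouchCell], user_positions: List[float]) -> None:
    """Enable, for each detected user, the touch cell nearest that user."""
    for cell in cells:
        cell.enabled = False
    for x in user_positions:
        nearest = min(cells, key=lambda c: abs(c.center_x - x))
        nearest.enabled = True   # a visual indicator could be driven from this flag
```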
  • FIG. 12B shows a logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE, e.g., a User Whiteboard Ergonomics (“UWE”) component.
  • the DWE may obtain touch input from one or more users into an active and enabled touch cells within a multi-user touchscreen device, e.g., 1211 .
  • the DWE may determine if any users are using touch cells, and identify the number of users, e.g., 1212 .
  • the DWE may select a user, e.g., 1213 , and identify the touch cell being used by that user, e.g., 1214 .
  • the DWE may utilize procedures such as those within the UGI component of FIG. 10 .
  • the DWE may set a coarse position for a floating toolbar within the active touch cell being used by the selected user. Then, the DWE may identify the coordinate position (e.g., [x,y,z]) at which the user is specifically applying input at any time, e.g., 1216 . Based on the coordinate position of the user input, the DWE may set a fine position within the active cell using the coordinate position of the user activity, e.g., 1217 . The DWE may generate a floating toolbar display, e.g., based on the specific user gesture being provided by the user (e.g., as determined using the UGI component of FIG. 10 ), e.g., 1218 .
  • the DWE may display the generated floating toolbar at the fine coordinate position set based on the ID number of the active cell and the user touch input location, e.g., 1219 .
  • the DWE may perform this procedure for all users, see, e.g., 1220.
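  • a rough, non-limiting sketch of the UWE loop above (e.g., 1211-1220), placing a floating toolbar coarsely within each user's active touch cell and then refining it to the user's touch coordinates; all helper names are illustrative assumptions:

```python
# Hypothetical per-user toolbar placement loop for the UWE component.
def position_floating_toolbars(dwe, users):
    for user in users:                                        # e.g., 1212-1213
        cell = dwe.active_cell_for(user)                      # e.g., 1214: cell in use
        position = cell.bounds.center()                       # e.g., 1215: coarse position
        touch = dwe.latest_touch_position(user)               # e.g., 1216: current input point
        if touch is not None:
            position = touch                                  # e.g., 1217: fine position
        toolbar = dwe.build_toolbar(user.current_gesture)     # e.g., 1218
        dwe.display_toolbar(toolbar, cell.cell_id, position)  # e.g., 1219
```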
  • FIG. 13 shows a block diagram illustrating example aspects of a DWE controller 1301 .
  • the DWE controller 1301 may serve to aggregate, process, store, search, serve, identify, instruct, generate, match, and/or facilitate interactions with a computer through various technologies, and/or other related data.
  • processors 1303 may be referred to as central processing units (CPU).
  • CPUs use communicative circuits to pass binary encoded signals acting as instructions to enable various operations. These instructions may be operational and/or data instructions containing and/or referencing other instructions and data in various processor accessible and operable areas of memory 1329 (e.g., registers, cache memory, random access memory, etc.).
  • Such communicative instructions may be stored and/or transmitted in batches (e.g., batches of instructions) as programs and/or data components to facilitate desired operations.
  • These stored instruction codes e.g., programs, may engage the CPU circuit components and other motherboard and/or system components to perform desired operations.
  • One type of program is a computer operating system, which may be executed by the CPU on a computer; the operating system enables and facilitates users to access and operate computer information technology and resources.
  • Some resources that may be employed in information technology systems include: input and output mechanisms through which data may pass into and out of a computer; memory storage into which data may be saved; and processors by which information may be processed. These information technology systems may be used to collect data for later retrieval, analysis, and manipulation, which may be facilitated through a database program. These information technology systems provide interfaces that allow users to access and operate various system components.
  • the DWE controller 1301 may be connected to and/or communicate with entities such as, but not limited to: one or more users from user input devices 1311 ; peripheral devices 1312 ; an optional cryptographic processor device 1328 ; and/or a communications network 1313 .
  • the DWE controller 1301 may be connected to and/or communicate with users, e.g., 1333 a , operating client device(s), e.g., 1333 b , including, but not limited to, personal computer(s), server(s) and/or various mobile device(s) including, but not limited to, cellular telephone(s), smartphone(s) (e.g., iPhone®, Blackberry®, Android OS-based phones etc.), tablet computer(s) (e.g., Apple iPadTM, HP SlateTM, Motorola XoomTM, etc.), eBook reader(s) (e.g., Amazon KindleTM, Barnes and Noble's NookTM eReader, etc.), laptop computer(s), notebook(s), netbook(s), gaming console(s) (e.g., XBOX LiveTM, Nintendo® DS, Sony PlayStation® Portable, etc.), portable scanner(s), and/or the like.
  • Networks are commonly thought to comprise the interconnection and interoperation of clients, servers, and intermediary nodes in a graph topology.
  • server refers generally to a computer, other device, program, or combination thereof that processes and responds to the requests of remote users across a communications network. Servers serve their information to requesting “clients.”
  • client refers generally to a computer, program, other device, user and/or combination thereof that is capable of processing and making requests and obtaining and processing any responses from servers across a communications network.
  • a computer, other device, program, or combination thereof that facilitates, processes information and requests, and/or furthers the passage of information from a source user to a destination user is commonly referred to as a “node.”
  • Networks are generally thought to facilitate the transfer of information from source points to destinations.
  • a node specifically tasked with furthering the passage of information from a source to a destination is commonly called a “router.”
  • There are many forms of networks such as Local Area Networks (LANs), Pico networks, Wide Area Networks (WANs), Wireless Networks (WLANs), etc.
  • the Internet is generally accepted as being an interconnection of a multitude of networks whereby remote clients and servers may access and interoperate with one another.
  • the DWE controller 1301 may be based on computer systems that may comprise, but are not limited to, components such as: a computer systemization 1302 connected to memory 1329 .
  • a computer systemization 1302 may comprise a clock 1330 , central processing unit (“CPU(s)” and/or “processor(s)” (these terms are used interchangeably throughout the disclosure unless noted to the contrary)) 1303 , a memory 1329 (e.g., a read only memory (ROM) 1306 , a random access memory (RAM) 1305 , etc.), and/or an interface bus 1307 , and most frequently, although not necessarily, are all interconnected and/or communicating through a system bus 1304 on one or more (mother)board(s) 1302 having conductive and/or otherwise transportive circuit pathways through which instructions (e.g., binary encoded signals) may travel to effectuate communications, operations, storage, etc.
  • the computer systemization may be connected to a power source 1386 ; e.g., optionally the power source may be internal.
  • a cryptographic processor 1326 and/or transceivers (e.g., ICs) 1374 may be connected to the system bus.
  • the cryptographic processor and/or transceivers may be connected as either internal and/or external peripheral devices 1312 via the interface bus I/O.
  • the transceivers may be connected to antenna(s) 1375, thereby effectuating wireless transmission and reception of various communication and/or sensor protocols; for example, the antenna(s) may connect to: a Texas Instruments WiLink WL1283 transceiver chip (e.g., providing 802.11n, Bluetooth 3.0, FM, global positioning system (GPS) (thereby allowing the DWE controller to determine its location)); a Broadcom BCM4329FKUBG transceiver chip (e.g., providing 802.11n, Bluetooth 2.1+EDR, FM, etc.), BCM28150 (HSPA+) and BCM2076 (Bluetooth 4.0, GPS, etc.); a Broadcom BCM47501UB8 receiver chip (e.g., GPS); an Infineon Technologies X-Gold 618-PMB9800 (e.g., providing 2G/3G HSDPA/HSUPA communications); Intel's XMM 7160 (LTE & DC-HSPA), Qualcomm's CDMA(2000), Mobile Data/Station
  • the system clock may have a crystal oscillator and generates a base signal through the computer systemization's circuit pathways.
  • the clock may be coupled to the system bus and various clock multipliers that will increase or decrease the base operating frequency for other components interconnected in the computer systemization.
  • the clock and various components in a computer systemization drive signals embodying information throughout the system. Such transmission and reception of instructions embodying information throughout a computer systemization may be referred to as communications. These communicative instructions may further be transmitted, received, and the cause of return and/or reply communications beyond the instant computer systemization to: communications networks, input devices, other computer systemizations, peripheral devices, and/or the like. It should be understood that in alternative embodiments, any of the above components may be connected directly to one another, connected to the CPU, and/or organized in numerous variations employed as exemplified by various computer systems.
  • the CPU comprises at least one high-speed data processor adequate to execute program components for executing user and/or system-generated requests.
  • the processors themselves will incorporate various specialized processing units, such as, but not limited to: floating point units, integer processing units, integrated system (bus) controllers, logic operating units, memory management control units, etc., and even specialized processing sub-units like graphics processing units, digital signal processing units, and/or the like.
  • processors may include internal fast access addressable memory, and be capable of mapping and addressing memory 1329 beyond the processor itself; internal memory may include, but is not limited to: fast registers, various levels of cache memory (e.g., level 1, 2, 3, etc.), RAM, etc.
  • the processor may access this memory through the use of a memory address space that is accessible via instruction address, which the processor can construct and decode allowing it to access a circuit path to a specific memory address space having a memory state/value.
  • the CPU may be a microprocessor such as: AMD's Athlon, Duron and/or Opteron; ARM's classic (e.g., ARM7/9/11), embedded (Cortex-M/R), application (Cortex-A), embedded and secure processors; IBM and/or Motorola's DragonBall and PowerPC; IBM's and Sony's Cell processor; Intel's Atom, Celeron (Mobile), Core (2/Duo/i3/i5/i7), Itanium, Pentium, Xeon, and/or XScale; and/or the like processor(s).
  • the CPU interacts with memory through instruction passing through conductive and/or transportive conduits (e.g., (printed) electronic and/or optic circuits) to execute stored instructions (i.e., program code).
  • instruction passing facilitates communication within the DWE controller and beyond through various interfaces.
  • depending on the particular implementation, distributed processors (e.g., Distributed DWE), mainframe, multi-core, parallel, and/or super-computer architectures, and/or smaller mobile devices (e.g., smartphones, Personal Digital Assistants (PDAs), etc.) may similarly be employed.
  • features of the DWE may be achieved by implementing a microcontroller such as CAST's R8051XC2 microcontroller; Intel's MCS 51 (i.e., 8051 microcontroller); and/or the like.
  • some feature implementations may rely on embedded components, such as: Application-Specific Integrated Circuit (“ASIC”), Digital Signal Processing (“DSP”), Field Programmable Gate Array (“FPGA”), and/or the like embedded technology.
  • any of the DWE component collection (distributed or otherwise) and/or features may be implemented via the microprocessor and/or via embedded components; e.g., via ASIC, coprocessor, DSP, FPGA, and/or the like.
  • some implementations of the DWE may be implemented with embedded components that are configured and used to achieve a variety of features or signal processing.
  • the embedded components may include software solutions, hardware solutions, and/or some combination of both hardware/software solutions.
  • DWE features discussed herein may be achieved through implementing FPGAs, which are semiconductor devices containing programmable logic components called “logic blocks” and programmable interconnects, such as the high performance FPGA Virtex series and/or the low cost Spartan series manufactured by Xilinx.
  • Logic blocks and interconnects can be programmed by the customer or designer, after the FPGA is manufactured, to implement any of the DWE features.
  • a hierarchy of programmable interconnects allows logic blocks to be interconnected as needed by the DWE system designer/administrator, somewhat like a one-chip programmable breadboard.
  • An FPGA's logic blocks can be programmed to perform the operation of basic logic gates such as AND and XOR, or more complex combinational operators such as decoders or simple mathematical operations.
  • the logic blocks also include memory elements, which may be circuit flip-flops or more complete blocks of memory.
  • the DWE may be developed on regular FPGAs and then migrated into a fixed version that more resembles ASIC implementations. Alternate or coordinating implementations may migrate DWE controller features to a final ASIC instead of or in addition to FPGAs.
  • all of the aforementioned embedded components and microprocessors may be considered the “CPU” and/or “processor” for the DWE.
  • the power source 1386 may be of any standard form for powering small electronic circuit board devices such as the following power cells: alkaline, lithium hydride, lithium ion, lithium polymer, nickel cadmium, solar cells, and/or the like. Other types of AC or DC power sources may be used as well. In the case of solar cells, in one embodiment, the case provides an aperture through which the solar cell may capture photonic energy.
  • the power cell 1386 is connected to at least one of the interconnected subsequent components of the DWE thereby providing an electric current to all the interconnected components.
  • the power source 1386 is connected to the system bus component 1304 .
  • an outside power source 1386 is provided through a connection across the I/O 1308 interface. For example, a USB and/or IEEE 1394 connection carries both data and power across the connection and is therefore a suitable source of power.
  • Interface bus(ses) 1307 may accept, connect, and/or communicate to a number of interface adapters, frequently, although not necessarily in the form of adapter cards, such as but not limited to: input output interfaces (I/O) 1308 , storage interfaces 1309 , network interfaces 1310 , and/or the like.
  • cryptographic processor interfaces 1327 similarly may be connected to the interface bus.
  • the interface bus provides for the communications of interface adapters with one another as well as with other components of the computer systemization. Interface adapters are adapted for a compatible interface bus. Interface adapters may connect to the interface bus via expansion and/or slot architecture.
  • expansion and/or slot architectures may be employed, such as, but not limited to: Accelerated Graphics Port (AGP), Card Bus, ExpressCard, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), Thunderbolt, and/or the like.
  • Storage interfaces 1309 may accept, communicate, and/or connect to a number of storage devices such as, but not limited to: storage devices 1314 , removable disc devices, and/or the like.
  • Storage interfaces may employ connection protocols such as, but not limited to: (Ultra) (Serial) Advanced Technology Attachment (Packet Interface) ((Ultra) (Serial) ATA(PI)), (Enhanced) Integrated Drive Electronics ((E)IDE), Institute of Electrical and Electronics Engineers (IEEE) 1394, Ethernet, fiber channel, Small Computer Systems Interface (SCSI), Thunderbolt, Universal Serial Bus (USB), and/or the like.
  • Network interfaces 1310 may accept, communicate, and/or connect to a communications network 1313 .
  • the DWE controller is accessible through remote clients 1333 b (e.g., computers with web browsers) by users 1333 a .
  • Network interfaces may employ connection protocols such as, but not limited to: direct connect, Ethernet (thick, thin, twisted pair 10/100/1000 Base T, and/or the like), Token Ring, wireless connection such as IEEE 802.11a-x, and/or the like.
  • distributed network controller architectures (e.g., Distributed DWE) may similarly be employed to pool, load balance, and/or otherwise increase the communicative bandwidth required by the DWE controller.
  • a communications network may be any one and/or the combination of the following: a direct interconnection; the Internet; a Local Area Network (LAN); a Metropolitan Area Network (MAN); an Operating Missions as Nodes on the Internet (OMNI); a secured custom connection; a Wide Area Network (WAN); a wireless network (e.g., employing protocols such as, but not limited to a Wireless Application Protocol (WAP), I-mode, and/or the like); and/or the like.
  • a network interface may be regarded as a specialized form of an input output interface.
  • multiple network interfaces 1310 may be used to engage with various communications network types 1313 . For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and/or unicast networks.
  • I/O 1308 may accept, communicate, and/or connect to user input devices 1311 , peripheral devices 1312 , cryptographic processor devices 1328 , and/or the like.
  • I/O may employ connection protocols such as, but not limited to: audio: analog, digital, monaural, RCA, stereo, and/or the like; data: Apple Desktop Bus (ADB), Bluetooth, IEEE 1394a-b, serial, universal serial bus (USB); infrared; joystick; keyboard; midi; optical; PC AT; PS/2; parallel; radio; video interface: Apple Desktop Connector (ADC), BNC, coaxial, component, composite, digital, DisplayPort, Digital Visual Interface (DVI), high-definition multimedia interface (HDMI), RCA, RF antennae, S-Video, VGA, and/or the like; wireless transceivers: 802.11a/b/g/n/x; Bluetooth; cellular (e.g., code division multiple access (CDMA), high speed packet access (HSPA(+)), high-speed down
  • One output device may be a video display, which may take the form of a Cathode Ray Tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED), Organic Light Emitting Diode (OLED), Plasma, and/or the like based monitor with an interface (e.g., VGA, DVI circuitry and cable) that accepts signals from a video interface.
  • the video interface composites information generated by a computer systemization and generates video signals based on the composited information in a video memory frame.
  • Another output device is a television set, which accepts signals from a video interface.
  • the video interface provides the composited video information through a video connection interface that accepts a video display interface (e.g., an RCA composite video connector accepting an RCA composite video cable; a DVI connector accepting a DVI display cable, HDMI, etc.).
  • User input devices 1311 often are a type of peripheral device 1312 (see below) and may include: card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, microphones, mouse (mice), remote controls, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors (e.g., accelerometers, ambient light, GPS, gyroscopes, proximity, etc.), styluses, and/or the like.
  • Peripheral devices 1312 may be connected and/or communicate to I/O and/or other facilities of the like such as network interfaces, storage interfaces, directly to the interface bus, system bus, the CPU, and/or the like. Peripheral devices may be external, internal and/or part of the DWE controller.
  • Peripheral devices may include: antenna, audio devices (e.g., line-in, line-out, microphone input, speakers, etc.), cameras (e.g., still, video, webcam, etc.), dongles (e.g., for copy protection, ensuring secure transactions with a digital signature, and/or the like), external processors (for added capabilities; e.g., crypto devices 1328 ), force-feedback devices (e.g., vibrating motors), near field communication (NFC) devices, network interfaces, printers, radio frequency identifiers (RFIDs), scanners, storage devices, transceivers (e.g., cellular, GPS, etc.), video devices (e.g., goggles, monitors, etc.), video sources, visors, and/or the like. Peripheral devices often include types of input devices (e.g., microphones, cameras, etc.).
  • the DWE controller may be embodied as an embedded, dedicated, and/or monitor-less (i.e., headless) device, wherein access would be provided over a network interface connection.
  • Cryptographic units such as, but not limited to, microcontrollers, processors 1326 , interfaces 1327 , and/or devices 1328 may be attached, and/or communicate with the DWE controller.
  • a MC68HC16 microcontroller manufactured by Motorola Inc., may be used for and/or within cryptographic units.
  • the MC68HC16 microcontroller utilizes a 16-bit multiply-and-accumulate instruction in the 16 MHz configuration and requires less than one second to perform a 512-bit RSA private key operation.
  • Cryptographic units support the authentication of communications from interacting agents, as well as allowing for anonymous transactions.
  • Cryptographic units may also be configured as part of the CPU. Equivalent microcontrollers and/or processors may also be used.
  • Typical commercially available specialized cryptographic processors include: Broadcom's CryptoNetX and other Security Processors; nCipher's nShield (e.g., Solo, Connect, etc.); SafeNet's Luna PCI (e.g., 7100) series; Semaphore Communications' 40 MHz Roadrunner 184; sMIP's (e.g., 208956); Sun's Cryptographic Accelerators (e.g., Accelerator 6000 PCIe Board, Accelerator 500 Daughtercard); Via Nano Processor (e.g., L2100, L2200, U2400) line, which is capable of performing 500+ MB/s of cryptographic instructions; VLSI Technology's 33 MHz 6868; and/or the like.
  • any mechanization and/or embodiment allowing a processor to affect the storage and/or retrieval of information is regarded as memory 1329 .
  • memory is a fungible technology and resource, thus, any number of memory embodiments may be employed in lieu of or in concert with one another.
  • the DWE controller and/or a computer systemization may employ various forms of memory 1329 .
  • a computer systemization may be configured wherein the operation of on-chip CPU memory (e.g., registers), RAM, ROM, and any other storage devices are provided by a paper punch tape or paper punch card mechanism; however, such an embodiment would result in an extremely slow rate of operation.
  • memory 1329 may include ROM 1306 , RAM 1305 , and a storage device 1314 .
  • a storage device 1314 may employ any number of computer storage devices/systems. Storage devices may include a drum; a (fixed and/or removable) magnetic disk drive; a magneto-optical drive; an optical drive (e.g., Blu-ray, CD ROM/RAM/Recordable (R)/ReWritable (RW), DVD R/RW, HD DVD R/RW, etc.); an array of devices (e.g., Redundant Array of Independent Disks (RAID)); solid state memory devices (USB memory, solid state drives (SSD), etc.); other processor-readable storage mediums; and/or other devices of the like.
  • a computer systemization generally requires and makes use of memory.
  • the memory 1329 may contain a collection of program and/or database components and/or data such as, but not limited to: operating system component(s) 1315 (operating system); information server component(s) 1316 (information server); user interface component(s) 1317 (user interface); Web browser component(s) 1318 (Web browser); database(s) 1319 ; mail server component(s) 1321 ; mail client component(s) 1322 ; cryptographic server component(s) 1320 (cryptographic server); the DWE component(s) 1335 ; and/or the like (i.e., collectively a component collection). These components may be stored and accessed from the storage devices and/or from storage devices accessible through an interface bus.
  • while non-conventional program components such as those in the component collection may be stored in a local storage device 1314, they may also be loaded and/or stored in memory such as: peripheral devices, RAM, remote storage facilities through a communications network, ROM, various forms of memory, and/or the like.
  • the operating system component 1315 is an executable program component facilitating the operation of the DWE controller.
  • the operating system may facilitate access of I/O, network interfaces, peripheral devices, storage devices, and/or the like.
  • the operating system may be a highly fault tolerant, scalable, and secure system such as: Apple Macintosh OS X (Server); AT&T Plan 9; Be OS; Unix and Unix-like system distributions (such as AT&T's UNIX; Berkeley Software Distribution (BSD) variations such as FreeBSD, NetBSD, OpenBSD, and/or the like; Linux distributions such as Red Hat, Ubuntu, and/or the like); and/or the like operating systems.
  • more limited and/or less secure operating systems also may be employed such as Apple Macintosh OS, IBM OS/2, Microsoft DOS, Microsoft Windows 2000/2003/3.1/95/98/CE/Millennium/NT/Vista/XP (Server), Palm OS, and/or the like.
  • mobile operating systems such as Apple's iOS, Google's Android, Hewlett Packard's WebOS, Microsoft's Windows Mobile, and/or the like may be employed. Any of these operating systems may be embedded within the hardware of the DWE controller, and/or stored/loaded into memory/storage.
  • An operating system may communicate to and/or with other components in a component collection, including itself, and/or the like.
  • the operating system communicates with other program components, user interfaces, and/or the like.
  • the operating system may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • the operating system once executed by the CPU, may enable the interaction with communications networks, data, I/O, peripheral devices, program components, memory, user input devices, and/or the like.
  • the operating system may provide communications protocols that allow the DWE controller to communicate with other entities through a communications network 1313 .
  • Various communication protocols may be used by the DWE controller as a subcarrier transport mechanism for interaction, such as, but not limited to: multicast, TCP/IP, UDP, unicast, and/or the like.
  • An information server component 1316 is a stored program component that is executed by a CPU.
  • the information server may be an Internet information server such as, but not limited to Apache Software Foundation's Apache, Microsoft's Internet Information Server, and/or the like.
  • the information server may allow for the execution of program components through facilities such as Active Server Page (ASP), ActiveX, (ANSI) (Objective-) C (++), C# and/or .NET, Common Gateway Interface (CGI) scripts, dynamic (D) hypertext markup language (HTML), FLASH, Java, JavaScript, Practical Extraction Report Language (PERL), Hypertext Pre-Processor (PHP), pipes, Python, wireless application protocol (WAP), WebObjects, and/or the like.
  • the information server may support secure communications protocols such as, but not limited to, File Transfer Protocol (FTP); HyperText Transfer Protocol (HTTP); Secure Hypertext Transfer Protocol (HTTPS), Secure Socket Layer (SSL), messaging protocols (e.g., America Online (AOL) Instant Messenger (AIM), Apple's iMessage, Application Exchange (APEX), ICQ, Internet Relay Chat (IRC), Microsoft Network (MSN) Messenger Service, Presence and Instant Messaging Protocol (PRIM), Internet Engineering Task Force's (IETF's) Session Initiation Protocol (SIP), SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE), open XML-based Extensible Messaging and Presence Protocol (XMPP) (i.e., Jabber or Open Mobile Alliance's (OMA's) Instant Messaging and Presence Service (IMPS)), Yahoo!
  • the information server provides results in the form of Web pages to Web browsers, and allows for the manipulated generation of the Web pages through interaction with other program components.
  • For example, a request such as http://123.124.125.126/myInformation.html might have the IP portion of the request “123.124.125.126” resolved by a Domain Name System (DNS) server to an information server at that IP address; that information server might in turn further parse the http request for the “/myInformation.html” portion of the request and resolve it to a location in memory containing the information “myInformation.html.”
  • other information serving protocols may be employed across various ports, e.g., FTP communications across port 21, and/or the like.
  • An information server may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the information server communicates with the DWE database 1319 , operating systems, other program components, user interfaces, Web browsers, and/or the like.
  • Access to the DWE database may be achieved through a number of database bridge mechanisms such as through scripting languages as enumerated below (e.g., CGI) and through inter-application communication channels as enumerated below (e.g., CORBA, WebObjects, etc.). Any data requests through a Web browser are parsed through the bridge mechanism into appropriate grammars as required by the DWE.
  • the information server would provide a Web form accessible by a Web browser. Entries made into supplied fields in the Web form are tagged as having been entered into the particular fields, and parsed as such. The entered terms are then passed along with the field tags, which act to instruct the parser to generate queries directed to appropriate tables and/or fields.
  • the parser may generate queries in standard SQL by instantiating a search string with the proper join/select commands based on the tagged text entries, wherein the resulting command is provided over the bridge mechanism to the DWE as a query.
  • the results are passed over the bridge mechanism, and may be parsed for formatting and generation of a new results Web page by the bridge mechanism. Such a new results Web page is then provided to the information server, which may supply it to the requesting Web browser.
  • an information server may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
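  • a simplified, non-limiting sketch of the bridge mechanism described above, in which entries tagged with their form-field names drive a query against the DWE database; the table/field handling shown is an illustrative assumption, with the entered values kept parameterized:

```python
# Hypothetical bridge sketch: tagged Web-form entries are turned into a
# SELECT against a DWE database table; field tags drive the WHERE clause.
import sqlite3

def query_from_form(conn: sqlite3.Connection, table: str, tagged_entries: dict):
    """Build and run a SELECT whose WHERE clause comes from tagged form fields."""
    if not tagged_entries:
        raise ValueError("expected at least one tagged form entry")
    fields = list(tagged_entries.keys())
    where = " AND ".join(f"{f} = ?" for f in fields)     # field tags select the columns
    sql = f"SELECT * FROM {table} WHERE {where}"          # values stay parameterized
    return conn.execute(sql, [tagged_entries[f] for f in fields]).fetchall()
```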
  • Computer interfaces in some respects are similar to automobile operation interfaces.
  • Automobile operation interface elements such as steering wheels, gearshifts, and speedometers facilitate the access, operation, and display of automobile resources, and status.
  • Computer interaction interface elements such as check boxes, cursors, menus, scrollers, and windows (collectively and commonly referred to as widgets) similarly facilitate the access, capabilities, operation, and display of data and computer hardware and operating system resources, and status. Operation interfaces are commonly called user interfaces.
  • Graphical user interfaces (GUIs) such as the Apple Macintosh Operating System's Aqua and iOS's Cocoa Touch, IBM's OS/2, Google's Android Mobile UI, Microsoft's Windows 2000/2003/3.1/95/98/CE/Millennium/Mobile/NT/XP/Vista/7/8 (i.e., Aero, Metro), Unix's X-Windows (e.g., which may include additional Unix graphic interface libraries and layers such as K Desktop Environment (KDE), mythTV and GNU Network Object Model Environment (GNOME)), web interface libraries (e.g., ActiveX, AJAX, (D)HTML, FLASH, Java, JavaScript, etc.), and/or interface libraries such as, but not limited to, Dojo, jQuery(UI), MooTools, Prototype, script.aculo.us, SWFObject, Yahoo! User Interface, any of which may be used, provide a baseline and means of accessing and displaying information graphically to users.
  • a user interface component 1317 is a stored program component that is executed by a CPU.
  • the user interface may be a graphic user interface as provided by, with, and/or atop operating systems and/or operating environments such as already discussed.
  • the user interface may allow for the display, execution, interaction, manipulation, and/or operation of program components and/or system facilities through textual and/or graphical facilities.
  • the user interface provides a facility through which users may affect, interact, and/or operate a computer system.
  • a user interface may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the user interface communicates with operating systems, other program components, and/or the like.
  • the user interface may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • a Web browser component 1318 is a stored program component that is executed by a CPU.
  • the Web browser may be a hypertext viewing application such as Google's (Mobile) Chrome, Microsoft Internet Explorer, Netscape Navigator, Apple's (Mobile) Safari, embedded web browser objects such as through Apple's Cocoa (Touch) object class, and/or the like.
  • Secure Web browsing may be supplied with 128 bit (or greater) encryption by way of HTTPS, SSL, and/or the like.
  • Web browsers may allow for the execution of program components through facilities such as ActiveX, AJAX, (D)HTML, FLASH, Java, JavaScript, web browser plug-in APIs (e.g., Chrome, FireFox, Internet Explorer, Safari Plug-in, and/or the like APIs), and/or the like.
  • Web browsers and like information access tools may be integrated into PDAs, cellular telephones, smartphones, and/or other mobile devices.
  • a Web browser may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the Web browser communicates with information servers, operating systems, integrated program components (e.g., plug-ins), and/or the like; e.g., it may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • a combined application may be developed to perform similar operations of both a Web browser and an information server. The combined application would similarly effect the obtaining and the provision of information to users, user agents, and/or the like from the DWE equipped nodes.
  • the combined application may be nugatory on systems employing standard Web browsers.
  • a mail server component 1321 is a stored program component that is executed by a CPU 1303 .
  • the mail server may be an Internet mail server such as, but not limited to Apple's Mail Server (3), dovecot, sendmail, Microsoft Exchange, and/or the like.
  • the mail server may allow for the execution of program components through facilities such as ASP, ActiveX, (ANSI) (Objective-) C (++), C# and/or .NET, CGI scripts, Java, JavaScript, PERL, PHP, pipes, Python, WebObjects, and/or the like.
  • the mail server may support communications protocols such as, but not limited to: Internet message access protocol (IMAP), Messaging Application Programming Interface (MAPI)/Microsoft Exchange, post office protocol (POP3), simple mail transfer protocol (SMTP), and/or the like.
  • the mail server can route, forward, and process incoming and outgoing mail messages that have been sent, relayed and/or otherwise traversing through and/or to the DWE.
  • Access to the DWE mail may be achieved through a number of APIs offered by the individual Web server components and/or the operating system.
  • a mail server may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, information, and/or responses.
  • a mail client component 1322 is a stored program component that is executed by a CPU 1303 .
  • the mail client may be a mail viewing application such as Apple (Mobile) Mail, Microsoft Entourage, Microsoft Outlook, Microsoft Outlook Express, Mozilla, Thunderbird, and/or the like.
  • Mail clients may support a number of transfer protocols, such as: IMAP, Microsoft Exchange, POP3, SMTP, and/or the like.
  • a mail client may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like.
  • the mail client communicates with mail servers, operating systems, other mail clients, and/or the like; e.g., it may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, information, and/or responses.
  • the mail client provides a facility to compose and transmit electronic mail messages.
  • a cryptographic server component 1320 is a stored program component that is executed by a CPU 1303 , cryptographic processor 1326 , cryptographic processor interface 1327 , cryptographic processor device 1328 , and/or the like.
  • Cryptographic processor interfaces will allow for expedition of encryption and/or decryption requests by the cryptographic component; however, the cryptographic component, alternatively, may run on a CPU.
  • the cryptographic component allows for the encryption and/or decryption of provided data.
  • the cryptographic component allows for both symmetric and asymmetric (e.g., Pretty Good Privacy (PGP)) encryption and/or decryption.
  • the cryptographic component may employ cryptographic techniques such as, but not limited to: digital certificates (e.g., X.509 authentication framework), digital signatures, dual signatures, enveloping, password access protection, public key management, and/or the like.
  • the cryptographic component will facilitate numerous (encryption and/or decryption) security protocols such as, but not limited to: checksum, Data Encryption Standard (DES), Elliptic Curve Cryptography (ECC), International Data Encryption Algorithm (IDEA), Message Digest 5 (MD5, which is a one way hash operation), passwords, Rivest Cipher (RC5), Rijndael, RSA (which is an Internet encryption and authentication system that uses an algorithm developed in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman), Secure Hash Algorithm (SHA), Secure Socket Layer (SSL), Secure Hypertext Transfer Protocol (HTTPS), and/or the like.
  • the DWE may encrypt all incoming and/or outgoing communications and may serve as a node within a virtual private network (VPN) with a wider communications network.
  • the cryptographic component facilitates the process of “security authorization” whereby access to a resource is inhibited by a security protocol wherein the cryptographic component effects authorized access to the secured resource.
  • the cryptographic component may provide unique identifiers of content, e.g., employing an MD5 hash to obtain a unique signature for a digital audio file.
  • a cryptographic component may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like.
  • the cryptographic component supports encryption schemes allowing for the secure transmission of information across a communications network to enable the DWE component to engage in secure transactions if so desired.
  • the cryptographic component facilitates the secure accessing of resources on the DWE and facilitates the access of secured resources on remote systems; i.e., it may act as a client and/or server of secured resources.
  • the cryptographic component communicates with information servers, operating systems, other program components, and/or the like.
  • the cryptographic component may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • the DWE database component 1319 may be embodied in a database and its stored data.
  • the database is a stored program component, which is executed by the CPU; the stored program component portion configuring the CPU to process the stored data.
  • the database may be any of a number of fault tolerant, relational, scalable, secure databases, such as DB2, MySQL, Oracle, Sybase, and/or the like.
  • Relational databases are an extension of a flat file. Relational databases consist of a series of related tables. The tables are interconnected via a key field. Use of the key field allows the combination of the tables by indexing against the key field; i.e., the key fields act as dimensional pivot points for combining information from various tables. Relationships generally identify links maintained between tables by matching primary keys. Primary keys represent fields that uniquely identify the rows of a table in a relational database. More precisely, they uniquely identify rows of a table on the “one” side of a one-to-many relationship.
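  • As a purely illustrative PHP/SQL sketch of such a key-field relationship (the table and column names here are hypothetical examples, not the DWE schema), a primary key on the "one" side may be matched against a key field on the "many" side to combine two tables:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("EXAMPLE.SQL"); // hypothetical example database
    // users_example.user_id is the primary key on the "one" side; each row of
    // viewports_example carries user_id as a key field on the "many" side
    $query = "SELECT u.user_id, v.viewport_x, v.viewport_y
              FROM users_example u
              JOIN viewports_example v ON v.user_id = u.user_id";
    $result = mysql_query($query); // combine the tables by indexing against the key field
    mysql_close(); // close database access
    ?>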
  • the DWE database may be implemented using various standard data-structures, such as an array, hash, (linked) list, struct, structured text file (e.g., XML), table, and/or the like. Such data-structures may be stored in memory and/or in (structured) files.
  • an object-oriented database may be used, such as Frontier, ObjectStore, Poet, Zope, and/or the like.
  • Object databases can include a number of object collections that are grouped and/or linked together by common attributes; they may be related to other object collections by some common attributes. Object-oriented databases perform similarly to relational databases with the exception that objects are not just pieces of data but may have other types of capabilities encapsulated within a given object.
  • the DWE database is implemented as a data-structure
  • the use of the DWE database 1319 may be integrated into another component such as the DWE component 1335 .
  • the database may be implemented as a mix of data structures, objects, and relational structures. Databases may be consolidated and/or distributed in countless variations through standard data processing techniques. Portions of databases, e.g., tables, may be exported and/or imported and thus decentralized and/or integrated.
  • the database component 1319 includes several tables 1319 a - l .
  • a Users table 1319 a may include fields such as, but not limited to: user_ID, first_name, last_name, middle_name, suffix, prefix, device_ID_list, device_name_list, device_type_list, hardware_configuration_list, software_apps_list, device_MAC_list, device_preferences_list, and/or the like (a purely illustrative creation sketch for this table is provided following the table descriptions below).
  • the Users table may support and/or track multiple entity accounts on a DWE.
  • a Clients table 1319 b may include fields such as, but not limited to: device_ID_list, device_name_list, device_type_list, hardware_configuration_list, software_apps_list, device_IP_list, device_MAC_list, device_preferences_list, and/or the like.
  • an Objects table 1319 c may include fields such as, but not limited to: size_pixels, resolution, scaling, x_position, y_position, height, width, shadow_flag, 3D_effect_flag, alpha, brightness, contrast, saturation, gamma, transparency, overlap, boundary_margin, rotation_angle, revolution_angle, and/or the like.
  • An Apps table 1319 d may include fields such as, but not limited to: app_name, app_id, app_version, app_software_requirements_list, app_hardware_requirements_list, and/or the like.
  • a Gestures table 1319 e may include fields such as, but not limited to: gesture_name, gesture_type, assoc_code_module, num_users, num_inputs, velocity_threshold_list, acceleration_threshold_list, pressure_threshold_list, and/or the like.
  • a Physics Models table 1319 f may include fields such as, but not limited to: acceleration, velocity, direction_x, direction_y, orientation_theta, orientation_phi, object_mass, friction_coefficient_x, friction_coefficient_y, friction_coefficient_theta, friction_coefficient_phi, object_elasticity, restitution_percent, terminal_velocity, center_of_mass, moment_inertia, relativistic_flag, newtonian_flag, collision_type, dissipation_factor, and/or the like.
  • a Viewports table 1319 g may include fields such as, but not limited to: user_id, client_id, viewport_shape, viewport_x, viewport_y, viewport_size_list, and/or the like.
  • a Whiteboards table 1319 h may include fields such as, but not limited to: whiteboard_id, whiteboard_name, whiteboard_team_list, whiteboard_directory, and/or the like.
  • An Object Contexts table 1319 i may include fields such as, but not limited to: object_id, object_type, system_settings_flag, object_menu_XML, and/or the like.
  • a System Contexts table 1319 j may include fields such as, but not limited to: object_type, system_settings_flag, system_menu_XML, and/or the like.
  • a Remote Window Contents table 1319 k may include fields such as, but not limited to: window_id, window_link, window_refresh_trigger, and/or the like.
  • a Market Data table 1319 l may include fields such as, but not limited to: market_data_feed_ID, asset_ID, asset_symbol, asset_name, spot_price, bid_price, ask_price, and/or the like; in one embodiment, the market data table is populated through a market data feed (e.g., Bloomberg's PhatPipe, Dun & Bradstreet, Reuter's Tib, Triarch, etc.), for example, through Microsoft's Active Template Library and Dealing Object Technology's real-time toolkit Rtt.Multi.
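  • A purely illustrative PHP/SQL sketch for creating the Users table described above (the column types are assumptions; the disclosure does not specify them) is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("USERS.SQL"); // select database
    // create the Users table with a subset of the fields listed above
    $query = "CREATE TABLE IF NOT EXISTS UsersTable (
                user_ID VARCHAR(64) PRIMARY KEY,
                first_name VARCHAR(64),
                last_name VARCHAR(64),
                device_ID_list TEXT,
                device_preferences_list TEXT)"; // assumed column types
    mysql_query($query); // execute the table creation
    mysql_close(); // close database access
    ?>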
  • the DWE database may interact with other database systems. For example, employing a distributed database system, queries and data access by the DWE component may treat the combination of the DWE database and an integrated data security layer database as a single database entity.
  • user programs may contain various user interface primitives, which may serve to update the DWE.
  • various accounts may require custom database tables depending upon the environments and the types of clients the DWE may need to serve. It should be noted that any unique fields may be designated as a key field throughout.
  • these tables have been decentralized into their own databases and their respective database controllers (i.e., individual database controllers for each of the above tables). Employing standard data processing techniques, one may further distribute the databases over several computer systemizations and/or storage devices. Similarly, configurations of the decentralized database controllers may be varied by consolidating and/or distributing the various database components 1319 a - l .
  • the DWE may be configured to keep track of various settings, inputs, and parameters via database controllers.
  • the DWE database may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the DWE database communicates with the DWE component, other program components, and/or the like. The database may contain, retain, and provide information regarding other nodes and data.
  • the DWE component 1335 is a stored program component that is executed by a CPU.
  • the DWE component incorporates any and/or all combinations of the aspects of the DWE discussed in the previous figures.
  • the DWE affects the accessing, obtaining, and provisioning of information, services, transactions, and/or the like across various communications networks.
  • the features and embodiments of the DWE discussed herein increase network efficiency by reducing data transfer requirements through the use of more efficient data structures and mechanisms for their transfer and storage. As a consequence, more data may be transferred in less time, and latencies with regard to transactions are also reduced.
  • In many cases, such reductions in storage, transfer time, bandwidth requirements, latencies, etc., will reduce the capacity and structural infrastructure requirements needed to support the DWE's features and facilities, and in many cases will reduce the costs and energy consumption/requirements and extend the life of the DWE's underlying infrastructure; this has the added benefit of making the DWE more reliable.
  • many of the features and mechanisms are designed to be easier for users to use and access, thereby broadening the audience that may enjoy/employ and exploit the feature sets of the DWE; such ease of use also helps to increase the reliability of the DWE.
  • the feature sets include heightened security as noted via the Cryptographic components 1320 , 1326 , 1328 and throughout, making access to the features and data more reliable and secure.
  • the DWE component may transform user multi-element touchscreen gestures, via DWE components, into updated digital collaboration whiteboard objects, and/or the like, and facilitate use of the DWE.
  • the DWE component 1335 takes inputs (e.g., collaborate request input 211 , authentication response 215 , tile objects data 220 , whiteboard input 611 , user whiteboard session object 616 , user instruction lookup response 619 , tile objects data 622 , affected clients data 627 , user input raw data 1001 , object-specified context instructions 1014 , system context interpretation instructions 1013 , and/or the like) etc., and transforms the inputs via various components (e.g., WCSI 1241 , CVS 1242 , VCG 1243 , UCW 1244 , UGI 1245 , UWE 1246 ; and/or the like), into outputs (e.g., collaborator acknowledgment 216 , user whiteboard session object 222 , whiteboard session details 224 , updated tile objects data 630 , updated user whiteboard session details
  • the DWE component enabling access of information between nodes may be developed by employing standard development tools and languages such as, but not limited to: Apache components, Assembly, ActiveX, binary executables, (ANSI) (Objective-) C (++), C# and/or .NET, database adapters, CGI scripts, Java, JavaScript, mapping tools, procedural and object oriented development tools, PERL, PHP, Python, shell scripts, SQL commands, web application server extensions, web development environments and libraries (e.g., Microsoft's ActiveX; Adobe AIR, FLEX & FLASH; AJAX; (D)HTML; Dojo, Java; JavaScript; jQuery(UI); MooTools; Prototype; script.aculo.us; Simple Object Access Protocol (SOAP); SWFObject; Yahoo!
  • the DWE server employs a cryptographic server to encrypt and decrypt communications.
  • the DWE component may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the DWE component communicates with the DWE database, operating systems, other program components, and/or the like.
  • the DWE may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • any of the DWE node controller components may be combined, consolidated, and/or distributed in any number of ways to facilitate development and/or deployment.
  • the component collection may be combined in any number of ways to facilitate deployment and/or development. To accomplish this, one may integrate the components into a common code base or in a facility that can dynamically load the components on demand in an integrated fashion.
  • the component collection may be consolidated and/or distributed in countless variations through standard data processing and/or development techniques. Multiple instances of any one of the program components in the program component collection may be instantiated on a single node, and/or across numerous nodes to improve performance through load-balancing and/or data-processing techniques. Furthermore, single instances may also be distributed across multiple controllers and/or storage devices; e.g., databases. All program component instances and controllers working in concert may do so through standard data processing communication techniques.
  • the configuration of the DWE controller will depend on the context of system deployment. Factors such as, but not limited to, the budget, capacity, location, and/or use of the underlying hardware resources may affect deployment requirements and configuration. Regardless of if the configuration results in more consolidated and/or integrated program components, results in a more distributed series of program components, and/or results in some combination between a consolidated and distributed configuration, data may be communicated, obtained, and/or provided. Instances of components consolidated into a common code base from the program component collection may communicate, obtain, and/or provide data. This may be accomplished through intra-application data processing communication techniques such as, but not limited to: data referencing (e.g., pointers), internal messaging, object instance variable communication, shared memory space, variable passing, and/or the like.
  • If program components are discrete and separate, communicating, obtaining, and/or providing data with and/or to other components may be accomplished through inter-application data processing communication techniques such as, but not limited to: Application Program Interfaces (API), (Distributed) Component Object Model ((D)COM), Simple Object Access Protocol (SOAP), and/or the like.
  • a grammar may be developed by using development tools such as lex, yacc, XML, and/or the like, which allow for grammar generation and parsing capabilities, which in turn may form the basis of communication messages within and between components.
  • a grammar may be arranged to recognize the tokens of an HTTP post command, e.g.:
  • the DWE controller may be executing a PHP script implementing a Secure Sockets Layer (“SSL”) socket server via the information server, which listens to incoming communications on a server port to which a client may send data, e.g., data encoded in JSON format.
  • the PHP script may read the incoming message from the client device, parse the received JSON-encoded text data to extract information from the JSON-encoded text data into PHP script variables, and store the data (e.g., client identifying information, etc.) and/or extracted information in a relational database accessible using the Structured Query Language (“SQL”).
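  • A minimal, purely illustrative PHP sketch of such message handling (the field and table names are hypothetical; for brevity the message body is read from the request stream rather than a raw socket) is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    $incoming = file_get_contents('php://input'); // JSON-encoded message from the client
    $data = json_decode($incoming, true); // extract information into PHP variables
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("CLIENTS.SQL"); // select database
    $client_id = mysql_real_escape_string($data['client_id']); // hypothetical field
    $client_ip = mysql_real_escape_string($data['client_ip']); // hypothetical field
    $query = "INSERT INTO ClientsTable (client_id, client_ip)
              VALUES ('$client_id', '$client_ip')"; // store extracted information
    mysql_query($query); // perform the insert
    mysql_close(); // close database access
    ?>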
  • FIGS. 14-65 are described with respect to an embodiment referred to herein as “ThoughtStream”. There are two modes of operation in this embodiment: record and retrieve. If the system is not retrieving, then it is recording, even if it is recording nothing. All information, media, and ideas are stored in perpetuity.
  • FIG. 14 is a drawing of a calculator. Normally, a calculator is loaded into your computer's memory, you interact with it by clicking buttons, and then you might copy and paste the results into a document. When you are done, you close the application, and the application is removed from memory. In the present embodiment, however, the calculator is never unloaded from memory. The functions, the button presses, the calculations, are all saved with a time index. Even if the calculator is “closed”, or removed from the system, the application and all of its history persist in the cloud.
  • ThoughtStream is a cloud-based application platform.
  • the SuperWall is a ThoughtStream Application viewer, and is also a special, turbocharged ThoughtStream Application.
  • ThoughtStream Applications are inherently multi-user, persistent, and device-independent.
  • Google Docs revolutionized office productivity by giving people the ability to externalize information in a way that had never been done before.
  • Google Docs doesn't store the history of a document, but it does store the STATE of a document. This is a perfect expression of the spirit and promise of the web applied to the idea of applications instead of "web pages".
  • the SuperWall has more in common with an operating system than a drawing tool. It's more like Windows than Word. It's more like a browser than a website.
  • FIG. 15 illustrates a traditional Desktop Application User Interface Paradigm: Windows are containers for application content.
  • the operating system provides methods to manipulate, close, open, resize, etc., the window containers themselves. This technique works great for Mouse-based interfaces on desktops or laptops, but not so well for large-format touch-based user interfaces, particularly where the content window can be virtually any size. For ergonomic (and aesthetic) reasons, we don't always want to see the menu or title bar. On a large display where you may have hundreds or thousands of objects at once, it will get very busy and confusing.
  • “objects” have no boundaries whatsoever. They can be as large or small as the user wishes, and they can be moved freely throughout the space. “Regions” also have no boundaries but they cannot overlap one another. In any given vertical line, there can only be one Region. This is because humans are vertical-standing creatures. “Toolbars” are strictly limited to the vertical space we call the “Ergonomic Zone”. They are in front of regions and objects and cannot be occluded by other toolbars. Now, no matter what configuration of Objects, we have ergonomically accessible toolbars.
  • Rule 1 is that in default state, single point swipes cause movement. Swiping the canvas will pan (if there are no Active Regions!).
  • a canvas toolbar is what appears when a user touches the canvas: any space that is not occupied by an Object or Region. See FIG. 22 .
  • touching the Cloud button opens up the Cloud Pane.
  • the Canvas Toolbar stays in the same position, but it scales up and transforms into the Cloud Pane.
  • the upper limit of the pane must remain within the upper Ergonomic Boundary.
  • the Cloud Pane allows users to select apps, documents, images, or other objects from a hierarchical display module. This set of objects can also be administered through an on-line component, perhaps Dropbox, SugarSync, or another solution. Touching an item once will cause it to appear on the canvas as a new object.
  • Toolbars make up the majority of the user interface for ThoughtStream. They provide a way to get to virtually all functionality within the system, and they are entirely dynamic entities. The reason for this is that we can create a dynamic user interface that is not cluttered with hundreds of buttons and widgets. We can create a system where only the necessary elements are visible at any given time. Coupled with the Region, virtually any task can be accomplished. Single user devices only need one toolbar, which does not necessarily leave the screen. In fact, it could be designed in a way that is similar to the Mac OS, wherein the toolbar at the top of the screen is always there, but it changes depending on which window has been given focus. The paradigm is similar here as well.
  • Toolbars "travel" across the SuperSpace. Toolbars are attracted to activity in a linear way, i.e., the amount of force or velocity applied to a toolbar is the same regardless of distance to the activity. However, there is also a repellant "inverse magnetism" force at work, which is non-linear; specifically, it scales as 1/d². This will cause the toolbar to approach activity, but as the user gets closer, the toolbar will move away, thereby not impeding writing or drawing.
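  • A purely illustrative PHP sketch of this behavior (the constants are hypothetical tuning values) is provided below:
  • <?PHP
    // net toolbar velocity toward user activity: a constant attraction combined
    // with a non-linear, inverse-square repulsion that grows as the user gets close
    function toolbar_velocity($distance, $attract = 1.0, $repel = 400.0) {
        if ($distance <= 0) {
            return 0.0; // already at the activity; do not move
        }
        return $attract - $repel / ($distance * $distance); // positive: approach; negative: back away
    }
    ?>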
  • a region is a combination of three concepts from the Desktop paradigm: Lasso, Personal Workspace (or Device View), and Application Focus.
  • Regions are used to do the following: Create a workspace within which to draw or manipulate information; Define an area within which one or more objects can be grouped, moved, arranged, copied, and deleted; and Create an area that is yours, and yours alone to manipulate. This helps when we think about “Undo” in a collaborative environment!
  • Object Regions. There are two different kinds of Regions: Object Regions and Canvas Regions.
  • Object Regions are connected to Objects, while Canvas Regions are separated from objects.
  • Canvas Regions are used to select multiple objects or declare a region of space within which to perform global functions, like drawing, copying, etc.
  • Object Regions have two modes: Active and Transform. (All inactive object regions are Transform regions.) Referring to FIG. 29 , a Transform Region is designed to allow users to scale and move the Region. Canvas Regions can scale and move independently of anything else, while Regions which are linked to Objects will affect the Objects they belong to. Touching the corners of a Transform Region will allow the user to scale, and touching the middle will allow the object to be moved. Multitouch gestures like Pinch and Zoom work independently of these handles.
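  • A purely illustrative PHP sketch of such handle hit-testing (the corner margin is a hypothetical value) is provided below:
  • <?PHP
    // a touch near a Transform Region's corner begins a scale; a touch elsewhere
    // within the Region begins a move ($region holds x, y, width, height)
    function transform_region_action($touch_x, $touch_y, $region, $corner_margin = 40) {
        $near_x_edge = abs($touch_x - $region['x']) < $corner_margin ||
                       abs($touch_x - ($region['x'] + $region['width'])) < $corner_margin;
        $near_y_edge = abs($touch_y - $region['y']) < $corner_margin ||
                       abs($touch_y - ($region['y'] + $region['height'])) < $corner_margin;
        return ($near_x_edge && $near_y_edge) ? 'scale' : 'move';
    }
    ?>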
  • Regions. There are a number of ways to create Regions. Referring to FIGS. 30 and 31 , a user can specify a region. This method requires that the user use two fingers to define a rectangular area by specifying opposing corners. After a Region has been created, the toolbar immediately appears.
  • As illustrated in FIG. 33 , users with a stylus can simply begin drawing anywhere, at any time.
  • As illustrated in FIG. 34 , if a user chooses Draw from a default Toolbar, a Region will be created automatically.
  • the default Toolbar has a very tiny Region that has no size yet, because it has not been given a boundary. Touching the Region Circle will cause a new Region to be created automatically, extending up to the Ergonomic Boundaries.
  • FIGS. 36A and 36B illustrate that a single tap on a clean canvas causes local canvas options to appear.
  • FIG. 37 illustrates that if a user chooses “Draw”, this puts the local UI Context into draw mode.
  • Local UI Contexts (or Regions) cover the Vertical Ergonomic Zone and the Horizontal Device Zone by default. In general, we want to avoid causing users to reach above their eye level and we want to avoid having them bend down.
  • the SuperSpace can extend a Region on-the-fly to fill adjacent area when needed. Region behavior is one of the most important, and complicated, components of Superspace UI design.
  • the tool palette always stays within the ergonomic zone. Also, a Region can be resized and moved just like any other object. It can also be used to cut & copy. Multiple Regions cannot overlap on the same device.
  • Travelling Regions. As illustrated in FIGS. 41 , 42 and 43 , drawing from one object to another causes the original object region to deactivate. A new Canvas Region is immediately created, which allows the drawing to continue not only onto the new object, but also onto the Canvas, other objects, or even the original object. Continuing to draw off the secondary object and into the canvas area has no effect.
  • the Canvas Region continues to adapt and follow the activity of the hand. A user can single tap any object within a Canvas Region to destroy the Canvas Region and activate the selected Object Region.
  • Closing a Region. As illustrated in FIG. 44 , simply pressing the Close button will close the Toolbar, as well as the associated Region. As illustrated in FIG. 45 , closing an Object Region will only deactivate the Region. The object itself will remain unchanged. Also, as illustrated in FIG. 46 , touching outside of a region will close it also. This will work above and below the region, as it is assumed that this is inside someone's personal space, and it may work within a specified distance from the Region, perhaps six inches.
  • Region Options can be used to select and manipulate many objects at once. Some examples of arranging many objects at once can be seen in FIG. 49 .
  • Objects can be arranged or moved as a group. As illustrated in FIG. 51 , Objects can be removed as a group.
  • the primary object type is the super image.
  • the standard object is primarily for display, drawing, or annotating, so its tools are designed to support that feature. Colors, eraser tools, and other brush options are available.
  • Passive mode (illustrated in FIG. 52 ):
  • a single touch-swipe causes movement.
  • a multi touch-swipe also causes movement.
  • a single tap activates/gives focus, and Pinch and Zoom cause scaling/translation.
  • Active mode (illustrated in FIG. 53 ):
  • a single touch-swipe causes object-specific activity.
  • a multi touch-swipe (pawing) is object-specific. Single tapping may close the region and immediately create a new Canvas Region.
  • FIG. 54 illustrates a multipage document.
  • FIG. 55 illustrates a video player.
  • FIG. 56 illustrates a third party object in the passive mode.
  • FIG. 58 illustrates it in the active mode.
  • FIG. 59 illustrates a dumb window in the passive mode.
  • FIG. 60 illustrates it in the active mode.
  • Object model is persistently linked to backend.
  • Possibilities include Skype, Video Conference, Audio Recorder, Video Player, RSS Viewer, Stock Ticker, Twitter Viewer, PDF Viewer.
  • Toolbars are linked to Regions
  • Object Regions are linked to Objects
  • Canvas Regions are linked to nothing!
  • Global Mode gives a user the ability to recall saved locations, zoom in and out, go back in time, switch sessions, and other “Meta” functions. If there are any Active Regions, that means someone is still working.
  • Global mode contains a powerful set of options which allow the user to administer high-level meta features. All of the features possible with the Cloud Pane (adding and managing Apps & Assets) are available here. Primarily, though, this is a place for large-scale wayfinding, setting bookmarks (Pins), as well as scrubbing back through time. Only one global mode may be active on a device at a given time. Global mode does not run as a "window" on a client device; it completely replaces the default view for a session, which permits advanced rendering of the smaller windows contained within. Users can pan around the SuperSpace just as they would in local mode. Double tapping will return them to local mode. Users can still create regions here, but there is no drawing, and no "Active Mode" or Focus for Objects. The regions here are only for organizing, moving, and erasing objects.
  • the history browser allows a user to return to any point in time.
  • Historical activity is shown on a calendar view and also a scrollable, zoomable timeline which gives a “bird's eye” view of activity over the course of many days, months, or years.
  • Drawing is not permitted when zoomed out ( FIG. 62 ). But what we can do is automatically zoom in if drawing is attempted ( FIG. 63 ).
  • Pan and Zoom is one of the most common UI paradigms for touch screens. All devices should support this gesture.
  • One of the major problems with global actions like Pan and Zoom is that we don't want one user to interfere with another user's space. One way around this is to not allow Pan and Zoom if there are any active Regions on the device.
  • touch cells in the ergonomic zone (see FIG. 57 ) are most useful.
  • the floor-to-ceiling viewing as a single screen is also critical.
  • the upper and lower portions as illustrated are difficult to reach; hence, the ergonomic zone is likely the most common area requiring touch.
  • touch cells are placed primarily in the Ergonomic Zone, roughly from eye level to waist level. This equates to the center row of cells in a three-high landscape design. See the illustration in FIG. 64 .
  • the outcome retains most of the wall's performance at a significantly lower cost. This both increases market penetration and differentiates the ThoughtStream wall solution.
  • a digital collaborative whiteboarding processor-implemented method embodiment comprising:
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • a digital collaborative whiteboarding system embodiment comprising:
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • the memory further storing instructions to:
  • the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • a chord-based gesturing processor-implemented method embodiment comprising:
  • the identification of the chord including a number of elements included in the chord
  • determining, via a processor, for each element included in the chord at least:
  • the touchscreen device is one of: a multi-user touchscreen device; and a mobile device.
  • a chord-based gesturing apparatus embodiment comprising:
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
  • chord identify a chord from the user touchscreen data input, the identification of the chord including a number of elements included in the chord;
  • gesture context for the chord includes a touchscreen object located at the spatial coordinate for the at least one chord element of the chord.
  • chord identify a chord from the user touchscreen data input, the identification of the chord including a number of elements included in the chord;
  • a digital whiteboard file system processor-implemented method embodiment comprising:
  • each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • tile content data structure includes metadata associated with the stored tile content.
  • a digital whiteboard system embodiment comprising:
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
  • the memory further stores instructions to:
  • each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • tile content data structure includes metadata associated with the stored tile content.
  • each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • tile content data structure includes metadata associated with the stored tile content.
  • a digital whiteboard viewer processor-implemented method embodiment comprising:
  • the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • the user whiteboarding instruction includes an instruction to display an evolution of tile content displayed within the client viewport screen via the touchscreen interface of the device.
  • tile content includes at least one of:
  • the client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • a digital whiteboard viewer apparatus embodiment comprising:
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
  • the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • the user whiteboarding instruction includes an instruction to display an evolution of tile content displayed within the client viewport screen via the touchscreen interface of the device.
  • tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • DWE data transmission and/or network framework, syntax structure, and/or the like
  • aspects of the DWE may be adapted for negotiations, mediation, group think studies, crowd-sourcing applications, and/or the like. While various embodiments and discussions of the DWE have been directed to digital collaboration, it is to be understood that the embodiments described herein may be readily configured and/or customized for a wide variety of other applications and/or implementations.

Abstract

The DIGITAL WORKSPACE ERGONOMICS APPARATUSES, METHODS AND SYSTEMS (“DWE”) transform user multi-element touchscreen gestures via DWE components into updated digital collaboration whiteboard objects. In one embodiment, the DWE obtains user whiteboard input from a client device participating in a digital collaborative whiteboarding session. The DWE parses the user whiteboard input to determine user instructions, and modifies a tile object included in the digital collaborative whiteboarding session according to the determined user instructions. The DWE generates updated client viewport content for the client device. Also, the DWE determines that client viewport content of a second client device should be modified because of modifying the tile object included in the digital whiteboard. The DWE generates updated client viewport content for the second client device after determining that the content of the second client device should be modified, and provides the updated client viewport content to the second client device.

Description

    PRIORITY CLAIM
  • This application claims priority under 35 USC §119 to U.S. Provisional Patent Application No. 61/697,248 filed Sep. 5, 2012, entitled “DIGITAL WHITEBOARD ERGONOMICS APPARATUSES, METHODS AND SYSTEMS,” Attorney Docket No. HAWT 1002-1. This application is also a continuation-in-part of, and claims priority under 35 U.S.C. §§120 and 365 to, United States Non-Provisional patent application Ser. No. 13/478,994, filed May 23, 2012, entitled “DIGITAL WHITEBOARD COLLABORATION APPARATUSES, METHODS AND SYSTEMS,” Attorney Docket No. HAWT 1001-2, which application claims priority under 35 USC §119 to U.S. Provisional Patent Application No. 61/489,238 filed May 23, 2011, entitled “DIGITAL WHITEBOARD COLLABORATION APPARATUSES, METHODS AND SYSTEMS,” Attorney Docket No. HAWT 1001-1. The entire contents of all the aforementioned applications are expressly incorporated by reference herein.
  • This application for letters patent discloses and describes various novel innovations and inventive aspects of DIGITAL WORKSPACE ERGONOMICS technology (hereinafter “disclosure”) and contains material that is subject to copyright and/or other intellectual property protection. The respective owners of such intellectual property have no objection to the facsimile reproduction of the disclosure by anyone as it appears in published Patent Office file/records, but otherwise reserve all rights.
  • FIELD
  • The present innovations generally address apparatuses, methods, and systems for digital collaboration, and more particularly, include DIGITAL WORKSPACE ERGONOMICS APPARATUSES, METHODS AND SYSTEMS (“DWE”).
  • BACKGROUND
  • In some instances, users may be required to work collaboratively with each other to achieve efficient results in their undertakings. Such users may sometimes be located remotely from each other. The collaborative interactions between such users may sometimes require communication of complex information.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying appendices and/or drawings illustrate various non-limiting, example, inventive aspects in accordance with the present disclosure:
  • FIGS. 1A-1K show a block diagram illustrating example aspects of digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 2A-2B show data flow diagrams illustrating an example procedure to initiate a whiteboarding session for a user in some embodiments of the DWE;
  • FIGS. 3A-3B show logic flow diagrams illustrating example aspects of initiating a whiteboarding session for a user in some embodiments of the DWE, e.g., a Whiteboard Collaborator Session Initiation (“WCSI”) component 300;
  • FIG. 4 shows a logic flow diagram illustrating example aspects of generating viewport specification for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Client Viewport Specification (“CVS”) component 400;
  • FIG. 5 shows a logic flow diagram illustrating example aspects of generating viewport content for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Viewport Content Generation (“VCG”) component 500;
  • FIGS. 6A-6C show data flow diagrams illustrating an example procedure to facilitate collaborative whiteboarding among a plurality of users in some embodiments of the DWE;
  • FIGS. 7A-7D show logic flow diagrams illustrating example aspects of facilitating collaborative whiteboarding among a plurality of users in some embodiments of the DWE, e.g., a User Collaborative Whiteboarding (“UCW”) component 700;
  • FIGS. 8A-8I show block diagrams illustrating example aspects of a pie-menu user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 9A-9C show block diagrams illustrating example aspects of a chord-based user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIG. 10 shows a logic flow diagram illustrating example aspects of identifying user gestures of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a User Gesture Identification (“UGI”) component 1000;
  • FIGS. 11A-11B show block diagrams illustrating example aspects of a whiteboarding telepresence system for digital whiteboard collaboration in some embodiments of the DWE;
  • FIGS. 12A-12B show a block diagram and logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE; and
  • FIG. 13 shows a block diagram illustrating embodiments of a DWE controller.
  • FIG. 14 illustrates a conventional calculator.
  • FIG. 15 illustrates a traditional Desktop Application User Interface Paradigm.
  • FIGS. 16, 19, 36A, 36B, 37-40, 57 and 64 each illustrate images on a display with a user or users standing in front.
  • FIGS. 17, 18, 20, 23-29, 49-56 and 58-63 each illustrate images on a display.
  • FIGS. 21, 22, 30-35 and 41-48 each illustrate images on a display with a user's hand or finger touching it.
  • The leading number of each reference number within the drawings indicates the figure in which that reference number is introduced and/or detailed. As such, a detailed discussion of reference number 101 would be found and/or introduced in FIG. 1. Reference number 201 is introduced in FIG. 2, etc.
  • DETAILED DESCRIPTION Digital Workspace Ergonomics (DWE)
  • The DIGITAL WORKSPACE ERGONOMICS APPARATUSES, METHODS AND SYSTEMS (hereinafter “DWE”) transform user multi-element touchscreen gestures, via DWE components, into updated digital collaboration whiteboard objects. FIGS. 1A-1K show a block diagram illustrating example aspects of digital whiteboard collaboration in some embodiments of the DWE. In some implementations, a plurality of users, e.g., 101 a-d, may desire to collaborate with each other in the creation of complex images, music, video, documents, and/or other media, e.g., 103 a-d. The users may be scattered across the globe in some instances. Users may utilize a variety of devices in order to collaborate with each other, e.g., 102 a-c. In some implementations, such devices may each accommodate a plurality of users (e.g., device 102 c accommodating users 101 c and 101 d). In some implementations, the DWE may utilize a central collaboration server, e.g., 105, and/or whiteboard database, e.g., 106, to achieve collaborative interaction between a plurality of devices, e.g., 104 a-c. In some implementations, the whiteboard database may have stored a digital whiteboard. For example, a digital collaboration whiteboard may be stored as data in memory, e.g., in whiteboard database 106. The data may, in various implementations, include image bitmaps, video objects, multi-page documents, scalable vector graphics, and/or the like. In some implementations, the digital collaboration whiteboard may be comprised of a plurality of logical subdivisions or tiles, e.g., 107 aa-107 mn. In some implementations, the digital whiteboard may be “infinite” in extent. For example, the number of logical subdivisions (tiles) may be as large as needed, subject only to memory storage and addressing considerations. For example, if the collaboration server utilizes 12-bit addressing, then the number of tiles may be limited only by the addressing system and/or the amount of memory available in the whiteboard database.
  • In some implementations, each tile may be represented by a directory in a file storage system. For example, with reference to FIG. 1D, six tiles are included in one level of tiles, e.g., 108 a-f. For each tile, a directory may be created in the file system, e.g., 109 a-f. In some implementations, each tile may be comprised of a number of sub-tiles. For example, a level 1 tile, e.g., 110, may be comprised of a number of level 2 tiles, e.g., 111a-d. In such implementations, each sub-tile may be represented by a sub-folder in the file system, e.g., 113. In some implementations, tiles at each level may be comprised of sub-tiles of a lower level, thus generating a tree hierarchy structure, e.g., 112-114. In some implementations, a folder representing a tile may store a whiteboard object container. For example, a folder may be named according to its tile ID, e.g., 115. For example, a folder having tile ID [11 02 07 44] may represent the 44th tile at the fourth level, under the 7th tile at the third level, under the 2nd tile at the second level, under the 11th tile at the first level. In some implementations, such a folder may have stored whiteboard object container(s), e.g., 116 a-d. The contents of the whiteboard object container may represent the contents of the tile in the digital whiteboard. The object container may include files such as, but not limited to: bitmap images, scalable vector graphics (SVG) files, eXtensible Markup Language (XML)/JavaScript™ object notation files, and/or the like. Such files may include data on objects contained within the digital collaboration whiteboard.
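  • A purely illustrative PHP sketch of such a tile-ID-to-directory mapping (the base path is hypothetical) is provided below:
  • <?PHP
    // map a hierarchical tile ID such as [11 02 07 44] to its folder in the file store
    function tile_directory($tile_id_levels, $base = '/whiteboards/wb01') {
        return $base . '/' . implode('/', $tile_id_levels);
    }
    // echo tile_directory(array('11', '02', '07', '44')); // /whiteboards/wb01/11/02/07/44
    ?>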
  • In some implementations, each file stored within a tile folder may be named to represent a version number, a timestamp, and/or like identification, e.g., 117 a-d. Thus, various versions of each tile may be stored in a tile folder. In some implementations, each tile folder may include sub-folders representing layers of a tile of the digital whiteboard. Thus, in some implementations, each whiteboard may be comprised of various layers of tile objects superimposed upon each other.
  • In some implementations, the hierarchical tree structure of folders may be replaced by a set of folders, wherein the file names of the folders represent the tile level and layer numbers of each tile/layer in the digital whiteboard. Accordingly, in such implementations, sub-tile/layer folders need not be stored within their parent folders, but may be stored alongside the parent folders in a flat file structure.
  • In some implementations, a whiteboard object container, e.g., 118, may include data representing various tile objects that may be displayed on the digital whiteboard. For example, the whiteboard object container may include data such as standalone videos 121 a (e.g., a link to a stored video), image objects, e.g., 121 b, multi-page documents, e.g., 121 c, freeform objects, e.g., 122, etc. In some implementations, the whiteboard object container may include a remote window object. For example, a remote window object may comprise a link to another object, e.g., a video, RSS feed, live video stream, client display screen, etc. For example, the link between the remote window object and any other object may be dynamically reconfigurable, e.g., 119. Thus, a remote window-linked object, e.g., 120, may be dynamically configured within the space reserved for the remote window within the digital whiteboard. Thus, for example, a randomly varying video or the contents of an RSS feed may be configured to display within the space reserved for the remote window.
  • In some implementations, object metadata may be associated with each tile object. For example, the metadata associated with an object may include a description of the object, object properties, and/or instructions for the DWE when the object is interrogated by a user (e.g., modified, viewed, clicked on, etc.). For example, an object may have associated XML-encoded data such as the example XML data provided below:
  • <tile_object>
      <general_properties>
        <object_id>AE1784</object_id>
        <owner_ID>john.q.public@collaborate.com</owner_ID>
        <client_IP>129.88.79.102</client_IP>
        <last_modified>2011010122:15:07</last_modified>
        <drawdata_pointer>//11/02/07/44/20110401092255
        </drawdata_pointer>
      </general_properties>
      <display_properties>
        <origin>[25,251]</origin>
        <visible>true</visible>
        <shared>true</shared>
        <dumb_window_link>false</dumb_window_link>
        <svg width="100%" height="100%" version="1.1"
          xmlns="http://www.w3.org/2000/svg">
          <circle cx="250" cy="75" r="33" stroke="blue"
          stroke-width="2" fill="yellow"/>
          <path d="M250 150 L150 350 L350 350 Z" />
          <polyline points="0,0 0,20 20,20 20,40 40,40 40,80"
          style="fill:white;stroke:green;stroke-width:2"/>
          <polygon points="280,75 300,210 170,275"
            style="fill:#cc5500;
            stroke:#ee00ee;stroke-width:1"/>
        </svg>
      </display_properties>
      <context_instructions>
        <left_click>left_menu.csv</left_click>
        <right_click>right_menu.csv</right_click>
        <middle_click>middle_menu.csv</middle_click>
        <thumb_press>order:clear</thumb_press>
      </context_instructions>
    </tile_object>
  • In some implementations, a client connected to a whiteboard collaboration session may communicate with the collaboration server to obtain a view of a portion of the digital whiteboard. For example, a client 126 may have associated with it a client viewport, e.g., a portion of the digital whiteboard 127 that is projected onto the client's display, e.g., 128 a. In such implementations, the portion of tile objects, e.g., 129 a extending into the client viewport, e.g., 128 a, of the client, e.g., 126, may be depicted on the display of client 126. In some implementations, a user may modify the client viewport of the client. For example, the user may modify the shape of the client viewport, and/or the position of the client viewport. For example, with reference to FIG. 1I, the user may provide user input, e.g., touchscreen gestures 130, to modify the client viewport from its state in 128 a to its state in 128 b. Thus, the contents of the viewport may be modified from tile object 129 a to a portion of tile object 131. In such a scenario, the portion of tile object 131 within the extent of the modified client viewport will be displayed on the display of client 126. In some implementations, the user may modify a tile object, e.g., 129 a into modified tile object 129 b, e.g., via user input 130. In such implementations, the modified tile object may be displayed on the display of the client 126.
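  • A purely illustrative PHP sketch of determining whether a tile object extends into a client viewport (rectangles are given as x, y, width, height; the names are hypothetical) is provided below:
  • <?PHP
    // true if the tile object's bounding box overlaps the client viewport, i.e.,
    // some portion of the object should be depicted on the client's display
    function extends_into_viewport($obj, $vp) {
        return $obj['x'] < $vp['x'] + $vp['width'] &&
               $vp['x'] < $obj['x'] + $obj['width'] &&
               $obj['y'] < $vp['y'] + $vp['height'] &&
               $vp['y'] < $obj['y'] + $obj['height'];
    }
    ?>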
  • In some implementations, a plurality of users may be utilizing clients to view portions of a digital whiteboard. For example, with reference to FIG. 1J, client 133 a may receive client viewport data 135 a comprising a depiction of the tile objects extending into client viewport 134 a. Client 133 b may receive client viewport data 135 b comprising a depiction of the tile objects extending into client viewport 134 b. Similarly, client 133 c may receive client viewport data 135 c comprising a depiction of the tile objects extending into client viewport 134 c. In some scenarios, the client viewports of different clients may not overlap (e.g., those of client 133 a and client 133 c). In other scenarios, the client viewports of two or more clients may overlap with each other, e.g., the client viewports 134 b and 134 c of clients 133 b and 133 c. In such scenarios, when a client modifies a tile object within the client's viewport, the modification of the tile object may be reflected in all viewports into which the modified portion of the tile object extends. Thus, in some implementations, a plurality of users may simultaneously observe the modification of a tile object made by another user, facilitating collaborative editing of the tile objects.
  • In some implementations, a user may utilize a client, e.g., 137, to observe the modifications to a portion of a digital whiteboard across time/versions. For example, a user may position the client's viewport, e.g., 138, over a portion of the digital whiteboard (e.g., via user gestures into the client 137), and observe a time/version-evolution animation, e.g., 139, of that portion of the digital whiteboard on the client device's display using (time-stamped) versions, e.g., 136 a-d, of the digital whiteboard.
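  • A purely illustrative PHP sketch of gathering the time-stamped versions of a tile for such an evolution animation (the tile folder path and file extension are hypothetical) is provided below:
  • <?PHP
    // list a tile folder's version files, whose names encode timestamps
    // (e.g., 20110401092255.svg), and order them chronologically
    $tile_folder = '/whiteboards/wb01/11/02/07/44';
    $versions = glob($tile_folder . '/*.svg');
    sort($versions); // timestamp-named files sort into time order
    foreach ($versions as $version_file) {
        echo $version_file . "\n"; // render each version in sequence to animate the evolution
    }
    ?>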
  • FIGS. 2A-2B show data flow diagrams illustrating an example procedure to initiate a whiteboarding session for a user in some embodiments of the DWE. In some implementations, a user, e.g., 201, may desire to join a collaborative whiteboarding session on a digital whiteboard. For example, the user may utilize a client, e.g., 202, to join the digital whiteboarding collaboration session. The client may be a client device such as, but not limited to, cellular telephone(s), smartphone(s) (e.g., iPhone®, Blackberry®, Android OS-based phones etc.), tablet computer(s) (e.g., Apple iPad™, HP Slate™, Motorola Xoom™, etc.), eBook reader(s) (e.g., Amazon Kindle™, Barnes and Noble's Nook™ eReader, etc.), laptop computer(s), notebook(s), netbook(s), gaming console(s) (e.g., XBOX Live™, Nintendo® DS, Sony PlayStation® Portable, etc.), portable scanner(s) and/or the like. The user may provide collaborate request input, e.g., 211, into the client, indicating the user's desire to join the collaborative whiteboarding session. In various implementations, the user input may include, but not be limited to: keyboard entry, mouse clicks, depressing buttons on a joystick/game console, (3D; stereoscopic, time-of-flight 3D, etc.) camera recognition (e.g., motion, body, hand, limb, facial expression, gesture recognition, and/or the like), voice commands, single/multi-touch gestures on a touch-sensitive interface, touching user interface elements on a touch-sensitive display, and/or the like. For example, the user may utilize user touchscreen input gestures such as, but not limited to, the gestures depicted in FIGS. 8A-8I and FIGS. 9A-9C. In some implementations, the client may identify the user collaborate request input. For example, the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10. Upon identifying the user collaborate request input, the client may generate and provide a user whiteboard request, e.g., 212, to a server, e.g., collaboration server 203. For example, the client may provide a (Secure) HyperText Transport Protocol (“HTTP(S)”) POST message with a message body encoded according to the eXtensible Markup Language (“XML”) and including the user collaborate request input information. An example of such a HTTP(S) POST message is provided below:
  • POST /join.php HTTP/1.1
    Host: www.collaborate.com
    Content-Type: Application/XML
    Content-Length: 324
    <?xml version = "1.0" encoding = "UTF-8"?>
    <join_request>
      <request_id>AJFY54</request_id>
      <timestamp>2010-05-23 21:44:12</timestamp>
      <user_ID>username@appserver.com</user_ID>
      <client_IP>275.37.57.98</client_IP>
      <client_MAC>EA-44-B6-F1</client_MAC>
      <session_id>4KJFH698</session_id>
      <session_name>work session 1</session_name>
    </join_request>
  • In some implementations, the server (e.g., collaboration server 203) may parse the user whiteboarding request, and extract user credentials, e.g., 213, from the user whiteboarding request. Based on the extracted user credentials, the server may generate an authentication query, e.g., 214, for a database, e.g., users database 204. For example, the server may query whether the user is authorized to join the collaborative whiteboarding session. For example, the server may execute a hypertext preprocessor (“PHP”) script including structured query language (“SQL”) commands to query the database for whether the user is authorized to join the collaborative whiteboarding session. An example of such a PHP/SQL command listing is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("USERS.SQL"); // select database to search
    // create query
    $query = "SELECT authorized_flag, client_settings_list, user_settings_list FROM
      UsersTable WHERE user_id LIKE '%$userid%' AND
      client_mac LIKE '%$clientMAC%'";
    $result = mysql_query($query); // perform the search query
    mysql_close(); // close database access
    ?>
  • In response to obtaining the authentication query, e.g., 214, the database may provide, e.g., 215, an authentication response to the server. In some implementations, the server may determine, based on the authentication response, that the user is authorized to join the collaborative whiteboarding session. In such implementations, the server may parse the user whiteboarding request and/or the authentication response, and obtain client specifications for the client 202. For example, the server may extract client specifications including, but not limited to: display size, resolution, orientation, frame rate, contrast ratio, pixel count, color scheme, aspect ratio, 3D capability, and/or the like. In some implementations, the server may generate client viewport specifications using the client specifications (e.g., as discussed further below with reference to FIG. 4). In some implementations, using the client viewport specifications, the server may generate a query for tile objects that lie within the viewport of the client. For example, the server may provide a tile objects query, e.g., 219, to a database, e.g., whiteboard database 205, requesting information on tile objects which may form part of the client viewport content displayed on the client 202. For example, the server may provide the tile IDs of the tiles which overlap with the client viewport, and request a listing of tile object IDs and tile object data for objects which may at least partially reside within those tiles. An example PHP/SQL command listing for querying a database for tile objects data within a single tile ID is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("OBJECTS.SQL"); // select database to search
    // create query (table name is an illustrative placeholder)
    $query = "SELECT object_id, object_data FROM TileObjectsTable
      WHERE tile_id LIKE '%$tileID%'";
    $result = mysql_query($query); // perform the search query
    mysql_close(); // close database access
    ?>
  • In some implementations, the database may, in response to the tile objects query 219, provide the requested tile objects data, e.g., 220. For example, the database may provide a data structure representative of a scalable vector illustration, e.g., a Scalable Vector Graphics (“SVG”) data file. The data structure may include, for example, data representing a vector illustration. For example, the data structure may describe a scalable vector illustration having one or more objects in the illustration. Each object may be comprised of one or more paths prescribing, e.g., the boundaries of the object. Further, each path may be comprised of one or more line segments. For example, a number of very small line segments may be combined end-to-end to describe a curved path. A plurality of such paths, for example, may be combined in order to form a closed or open object. Each of the line segments in the vector illustration may have start and/or end anchor points with discrete position coordinates for each point. Further, each of the anchor points may comprise one or more control handles. For example, the control handles may describe the slope of a line segment terminating at the anchor point. Further, objects in a vector illustration represented by the data structure may have stroke and/or fill properties specifying patterns to be used for outlining and/or filling the object. Further information stored in the data structure may include, but not be limited to: motion paths for objects, paths, line segments, anchor points, etc. in the illustration (e.g., for animations, games, video, etc.), groupings of objects, composite paths for objects, layering information (e.g., which objects are on top, and which objects appear as if underneath other objects, etc.) and/or the like. For example, the data structure including data on the scalable vector illustration may be encoded according to the open XML-based Scalable Vector Graphics “SVG” standard developed by the World Wide Web Consortium (“W3C”). An exemplary XML-encoded SVG data file, written substantially according to the W3C SVG standard, and including data for a vector illustration comprising a circle, a closed path, an open polyline composed of a plurality of line segments, and a polygon, is provided below:
  • <?xml version="1.0" standalone="no"?>
    <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
      "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
    <svg width="100%" height="100%" version="1.1"
      xmlns="http://www.w3.org/2000/svg">
      <circle cx="250" cy="75" r="33" stroke="blue"
        stroke-width="2" fill="yellow"/>
      <path d="M250 150 L150 350 L350 350 Z"/>
      <polyline points="0,0 0,20 20,20 20,40 40,40 40,80"
        style="fill:white;stroke:green;stroke-width:2"/>
      <polygon points="280,75 300,210 170,275"
        style="fill:#cc5500;stroke:#ee00ee;stroke-width:1"/>
    </svg>
  • In some implementations, the server may generate client viewport data (e.g., bitmap, SVG file, video stream, RSS feed, etc.) using the tile objects data and client viewport specifications, e.g. 223. The server may provide the generated client viewport data and client viewport specifications as whiteboard session details and client viewport data, e.g., 224.
  • In some implementations, the client may render, e.g. 225, the visualization represented in the client viewport data for display to the user. For example, the client may be executing an Adobe® Flash object within a browser environment including ActionScript™ 3.0 commands to render the visualization represented in the data structure, and display the rendered visualization for the user. Exemplary commands, written substantially in a form adapted to ActionScript™ 3.0, for rendering a visualization of a scene within an Adobe® Flash object with appropriate dimensions and specified image quality are provided below:
  • // import necessary modules/functions
    import flash.display.BitmapData;
    import flash.geom.Matrix;
    import flash.utils.ByteArray;
    import com.adobe.images.JPGEncoder;
    // generate empty bitmap with appropriate dimensions
    var bitSource:BitmapData = new BitmapData(sketch_mc.width,
      sketch_mc.height);
    // capture snapshot of movie clip in bitmap
    bitSource.draw(sketch_mc);
    // generate scaling constants for output with a 1280-pixel maximum dimension
    var res:Number = 1280 / Math.max(sketch_mc.width, sketch_mc.height);
    var scaledWidth:int = Math.round(sketch_mc.width * res);
    var scaledHeight:int = Math.round(sketch_mc.height * res);
    // scale the captured snapshot into a new bitmap
    var scaleMatrix:Matrix = new Matrix();
    scaleMatrix.scale(res, res);
    var scaled:BitmapData = new BitmapData(scaledWidth, scaledHeight);
    scaled.draw(bitSource, scaleMatrix, null, null, null, true);
    // JPEG-encode bitmap with 85% JPEG compression image quality
    var jpgEncoder:JPGEncoder = new JPGEncoder(85);
    var jpgStream:ByteArray = jpgEncoder.encode(scaled);
  • In some implementations, the client may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g., 226, in order to produce continuous motion of the objects displayed on the visual display unit connected to the client. In some implementations, the DWE may contain a library of pre-rendered images and visual objects indexed to be associated with one or more search result terms or phrases, such as Clip Art files, e.g., accessible through Microsoft® PowerPoint application software.
  • FIGS. 3A-3B show logic flow diagrams illustrating example aspects of initiating a whiteboarding session for a user in some embodiments of the DWE, e.g., a Whiteboard Collaborator Session Initiation (“WCSI”) component 300. In some implementations, a user may desire to join a collaborative whiteboarding session on a digital whiteboard. For example, the user may utilize a client to join the digital whiteboarding collaboration session. The user may provide collaborate request input, e.g., 301, into the client, requesting that the user join the whiteboarding session (e.g., via a whiteboarding app installed and/or executing on the client, such as an iPhone®/iPad® app, Adobe® Flash object, JavaScript™ code executing within a browser environment, application executable (*.exe) file, etc.). In some implementations, the client may identify the user collaborate request input. For example, the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10. Upon identifying the user collaborate request input, the client may generate and provide a user whiteboard request, e.g., 302, to a collaboration server. In some implementations, the collaboration server may parse the user whiteboarding request and extract user credentials, e.g., 303. Example parsers that the server may utilize are described further below in the discussion with reference to FIG. 13. Based on the extracted user credentials, the server may generate an authentication query, e.g., 304, for a users database, e.g., by executing PHP/SQL commands similar to the examples above. In some implementations, the database may provide an authentication response, e.g., 305. The server may parse the obtained authentication response, and extract the authentication status of the user/client, e.g., 306. If the user is not authenticated, e.g., 307, option “No,” the server may generate a login failure message, and/or may initiate an error handling routine, e.g., 308.
  • In some implementations, upon authentication of the user/client, e.g., 307, option “Yes,” the server may generate a collaborator acknowledgment, e.g., 309, for the user/client. The client may obtain the server's collaborator acknowledgment, e.g., 310, and in some implementations, display the acknowledgment for the user, e.g., 311.
  • In some implementations, the server may parse the user whiteboarding request and/or the authentication response, and obtain client specifications for the client. For example, the server may extract client specifications including, but not limited to: display size, resolution, orientation, frame rate, contrast ratio, pixel count, color scheme, aspect ratio, 3D capability, and/or the like, using parsers such as those described further below in the discussion with reference to FIG. 13. In some implementations, e.g., where the client viewport specifications have not been previously generated for the client being used by the user, the server may generate client viewport specifications using the specifications of the client. For example, the server may utilize a component such as the example client viewport specification component 400 discussed further below with reference to FIG. 4. In some implementations, using the client viewport specifications, the server may generate a query for tile objects that lie within the viewport of the client. For example, the server may provide a tile objects query, e.g., 314, to a whiteboard database 205, requesting information on tile objects which may form part of the client viewport content displayed on the client. For example, the server may provide the tile IDs of the tiles which overlap with the client viewport, and request a listing of tile object IDs and tile object data for objects which may at least partially reside within those tiles. In some implementations, the database may, in response to the tile objects query 314, provide the requested tile objects data, e.g., 315. In some implementations, the server may generate a whiteboard session object, e.g., 316, using the client viewport specifications and/or the tile objects data. In some implementations, the server may store the whiteboard session object to a database, e.g., 317. In some implementations, the server may generate client viewport data (e.g., bitmap, SVG file, video stream, RSS feed, etc.) using the tile objects data and client viewport specifications, e.g., 318. The server may provide the generated client viewport data and client viewport specifications, e.g., 319, to the client. In some implementations, the client may render, e.g., 320, the visualization represented in the client viewport data for display to the user and/or continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g., 321, in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
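  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which a whiteboard session object might be assembled from the client viewport specifications and the tile objects data and stored to a database, is provided below. The field, table and variable names used (e.g., SessionObjects, $client_viewport_spec) are illustrative placeholders only:
  • <?PHP
    // assemble a whiteboard session object from the client viewport specifications and the
    // tile objects data obtained above (field names are illustrative placeholders)
    $session_object = array(
        "session_id" => $sessionID,
        "user_id" => $userID,
        "client_id" => $clientID,
        "created" => date("Y-m-d H:i:s"),
        "viewport_spec" => $client_viewport_spec, // e.g., position, size, zoom level
        "tile_object_ids" => $tile_object_ids, // tile objects extending into the viewport
    );
    // serialize and store the session object, e.g., in a whiteboard database
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("WHITEBOARD.SQL"); // select database to search
    $encoded = mysql_real_escape_string(json_encode($session_object));
    mysql_query("INSERT INTO SessionObjects (session_id, session_data) " .
        "VALUES ('$sessionID', '$encoded')"); // store the session object
    mysql_close(); // close database access
    ?>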
  • FIG. 4 shows a logic flow diagram illustrating example aspects of generating viewport specification for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Client Viewport Specification (“CVS”) component 400. In some implementations, a DWE component, e.g., a collaboration server, may obtain a request, e.g., 401, to generate new and/or updated client viewport specifications for a client of a user involved in, or seeking to join, a whiteboarding session within the DWE. For example, the request may be in the form of a HTTP(S) POST message with XML-encoded message body, similar to the examples provided above. The DWE may parse the request, and extract a client ID from the request. The DWE may generate a query, e.g., 403, for existing client viewport specifications associated with the client ID. For example, the DWE may utilize PHP/SQL commands to query a database, similar to the examples provided above. If an existing client viewport specification is available for the given client ID, e.g., 404, option “Yes,” the DWE may obtain the existing client viewport specification, e.g., from a database. The DWE may parse the request, and extract any operations required to be performed on the existing client viewport specification (e.g., if the request is for updating the client viewport specification). For example, the request may include a plurality of client viewport modification instructions (e.g., convert viewport from rectangular shape to circular shape, modify the zoom level of the viewport, modify the aspect ratio of the viewport, modify the position of the viewport, etc.). The DWE may select each instruction, e.g., 407, and calculate an updated client viewport specification based on the instruction using the previous version of the client viewport specification, e.g., 408. In some implementations, the DWE may operate on the client viewport specifications using each of the instructions, e.g., 409, until all client viewport modification operations have been performed, e.g., 409, option “No.” In some implementations, the DWE may return the updated client viewport specifications, e.g., 413.
  • In some implementations, the DWE may determine that there are no existing client viewport specifications. In such implementations, the DWE may generate client viewport specification data variables, e.g., display size, resolution, shape, aspect ratio, zoom level, [x,y] position, whiteboard layers visible, etc., e.g., 410. The DWE may initially set default values for each of the client viewport specification variables. The DWE may obtain the client device specifications (e.g., client's display monitor size, pixel count, color depth, resolution, etc.), e.g., 411. Based on the client's actual specifications, the DWE may calculate an updated client viewport specification using the client device specifications and the default values set for each of the client viewport specification variables. The DWE may return the calculated updated client viewport specifications, e.g., 413.
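  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which the CVS component logic described above might be implemented (applying viewport modification instructions to an existing specification, or deriving a default specification from the client device specifications) is provided below. The instruction types and field names used (e.g., 'zoom', 'display_width') are illustrative placeholders only:
  • <?PHP
    // apply a list of viewport modification instructions to an existing client viewport
    // specification, or derive a default specification from the client device specifications
    // (instruction types and field names are illustrative placeholders)
    function update_viewport_spec($spec, $instructions) {
        foreach ($instructions as $instr) {
            switch ($instr['op']) {
                case 'zoom': // modify the zoom level of the viewport
                    $spec['zoom'] = $instr['zoom_level'];
                    break;
                case 'move': // modify the [x,y] position of the viewport
                    $spec['x'] = $instr['x'];
                    $spec['y'] = $instr['y'];
                    break;
                case 'aspect_ratio': // modify the aspect ratio of the viewport
                    $spec['height'] = $spec['width'] / $instr['aspect_ratio'];
                    break;
            }
        }
        return $spec;
    }
    function default_viewport_spec($device_specs) {
        return array(
            'shape' => 'rectangle',
            'zoom' => 1.0,
            'x' => 0,
            'y' => 0,
            'width' => $device_specs['display_width'], // e.g., pixel count of the client display
            'height' => $device_specs['display_height'],
            'layers' => array('all'), // whiteboard layers visible
        );
    }
    ?>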
  • FIG. 5 shows a logic flow diagram illustrating example aspects of generating viewport content for a client of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a Viewport Content Generation (“VCG”) component 500. In some implementations, a component of the DWE (e.g., collaboration server) may obtain a request to update/generate client viewport data to provide for a client involved in a whiteboarding session, e.g., 501. In some implementations, the DWE may parse the request, and extract a client ID from the request, e.g., 502. The DWE may generate a query, e.g., 503, for client viewport specifications associated with the client ID. For example, the DWE may utilize PHP/SQL commands to query a database, similar to the examples provided above. The DWE may obtain the existing client viewport specification, e.g., from a database, e.g., 504. In some implementations, the DWE may determine tile IDs of whiteboard tiles that overlap with the client viewport of the client, e.g., 505. For example, the DWE may calculate the extent of the client viewport using the client viewport specifications (e.g., position coordinates and length/width). Based on the extent of the client viewport, the DWE may determine which of the tiles the client viewport extends into, and obtain the tile IDs of the determined whiteboard tiles. In some implementations, the DWE may obtain tile object data for all tile objects that lie within the tiles into which the client viewport extends. For example, the DWE may query, e.g., 506, for tile objects data of all tile objects that extend into tiles that the client viewport also extends into. For example, the DWE may obtain such data from a database, e.g., 507. In some implementations, the DWE may generate a rendered bitmap of the tiles corresponding to the determined tile IDs using the tile objects data, e.g., 508. In alternate implementations, the DWE may generate SVG files, video, documents, and/or other like objects that may be displayed on the client's display monitor. In some implementations, the DWE may determine a portion of the rendered bitmap that overlaps with the client viewport, based on the client viewport specifications, e.g., 509. The DWE may extract the determined portion of the rendered bitmap, e.g., 510, and provide the portion as updated client viewport data to the client, e.g., 511.
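  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which the tile IDs of whiteboard tiles overlapping a client viewport might be determined, assuming the whiteboard is divided into a regular grid of square tiles, is provided below. The tile size and the tile ID naming scheme used are illustrative placeholders only:
  • <?PHP
    // determine which whiteboard tiles a client viewport extends into, assuming a regular
    // grid of square tiles (tile size and tile ID naming scheme are illustrative placeholders)
    function overlapping_tile_ids($viewport, $tile_size) {
        $first_col = (int) floor($viewport['x'] / $tile_size);
        $last_col = (int) floor(($viewport['x'] + $viewport['width'] - 1) / $tile_size);
        $first_row = (int) floor($viewport['y'] / $tile_size);
        $last_row = (int) floor(($viewport['y'] + $viewport['height'] - 1) / $tile_size);
        $tile_ids = array();
        for ($row = $first_row; $row <= $last_row; $row++) {
            for ($col = $first_col; $col <= $last_col; $col++) {
                $tile_ids[] = "tile_" . $row . "_" . $col; // e.g., tile ID derived from grid position
            }
        }
        return $tile_ids;
    }
    ?>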
  • FIGS. 6A-6C show data flow diagrams illustrating an example procedure to facilitate collaborative whiteboarding among a plurality of users in some embodiments of the DWE. In some implementations, a user, e.g., 601 a, may desire to collaborate with other users, e.g., users 601 b-c (FIG. 6C), in a collaborative whiteboarding session. For example, the user may desire to modify the contents of a digital whiteboard (e.g., one of a plurality of digital whiteboards) included within the collaborative whiteboarding session. For example, the user may utilize a client, e.g., 602 a, to participate in the digital whiteboarding collaboration session. The user may provide whiteboard input, e.g., 611, into the client, indicating the user's desire to modify the collaborative whiteboarding session (e.g., modify the contents of a digital whiteboard; modify a participating client's view of a digital whiteboard, etc.). In various implementations, the whiteboard input may include, but not be limited to: keyboard entry, mouse clicks, depressing buttons on a joystick/game console, (3D; stereoscopic, time-of-flight 3D, etc.) camera recognition (e.g., motion, body, hand, limb, facial expression, gesture recognition, and/or the like), voice commands, single/multi-touch gestures on a touch-sensitive interface, touching user interface elements on a touch-sensitive display, and/or the like. For example, the user may utilize user touchscreen input gestures such as, but not limited to, the gestures depicted in FIGS. 8A-8I and FIGS. 9A-9C.
  • In some implementations, the client may capture the user's whiteboard input, e.g., 612. The client may identify the user's whiteboard input in some implementations. For example, the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10, to identify gesture(s) made by the user on a touchscreen display of the client to modify the collaborative whiteboarding session. Upon identifying the user whiteboard input, the client may generate and provide a whiteboard input message, e.g., 613, to a server, e.g., collaboration server 603. For example, the client may provide a (Secure) HyperText Transport Protocol (“HTTP(S)”) POST message with an XML-encoded message body including the user whiteboard input and/or identified user gesture(s). An example of such a HTTP(S) POST message is provided below:
  • POST /session.php HTTP/1.1
    Host: www.collaborate.com
    Content-Type: Application/XML
    Content-Length: 229
    <?xml version="1.0" encoding="UTF-8"?>
    <user_input>
      <log_id>AJFY54</log_id>
      <timestamp>2010-05-23 21:44:12</timestamp>
      <user_ID>username@appserver.com</user_ID>
      <client_IP>275.37.57.98</client_IP>
      <client_MAC>EA-44-B6-F1</client_MAC>
      <session_id>4KJFH698</session_id>
      <gestures>
        <1><id>FDKI28</id><related_text>john.q.public
        </related_text></1>
        <2><id>DJ38FF</id><related_text>see marked
        changes</related_text></2>
      </gestures>
    </user_input>
  • In some implementations, the server (e.g., collaboration server 603) may parse the user whiteboard input, and extract the user ID, client ID, and/or user gestures from the whiteboard input message, e.g., 614. Based on the extracted information, the server may generate a whiteboard session query, e.g., 615, for the gesture context, e.g., the viewport content of the client 602 a being used by the user. For example, the server may query a database, e.g., whiteboard database 605, for the client viewport specifications and tile objects corresponding to the client viewport specifications. An example PHP/SQL command listing for querying a database for client viewport specifications, and tile objects data within a single tile ID, is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("USERS.SQL"); // select database to search
    // create query (table names below are illustrative placeholders)
    $query = "SELECT client_viewport_coordinates FROM ClientViewportsTable WHERE
      client_id LIKE '%$clientID%'";
    $result = mysql_query($query); // perform the search query
    mysql_select_db("OBJECTS.SQL"); // select database to search
    // create query
    $query = "SELECT object_id, object_data FROM TileObjectsTable WHERE tile_id
      LIKE '%$tileID%'";
    $result = mysql_query($query); // perform the search query
    mysql_close(); // close database access
    ?>
  • In some implementations, the database may, in response to the whiteboard session query, provide the requested client viewport specifications and tile objects data, e.g., whiteboard session object 616. For example, the database may provide an SVG data file representing the tile objects and/or an XML data file representing the client viewport specifications.
  • In some implementations, the server may determine the user's intended instructions based on the user's gestures and the gesture context, e.g., as retrieved from the database. For example, the user's intended instructions may depend on the context within which the user gestures were made. For example, each user gesture may have a pre-specified meaning depending on the type of tile object upon which the user gesture was made. For example, a particular user gesture may have a pre-specified meaning depending on whether the object above which the gesture was made was a video, or a multi-page document. In some implementations, the tile object on which the gesture was made may include gesture/context interpretation instructions, which the server may utilize to determine the appropriate instructions intended by the user. In alternate implementations, the server and/or databases may have stored gesture/context interpretation instructions for each type of object (e.g., image, SVG vector image, video, remote window, etc.), and similar user instructions may be inferred from a user gesture above all objects of a certain type.
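  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which a user gesture might be resolved into an intended instruction, preferring any custom gesture/context interpretation instructions carried by the tile object and falling back to system-wide instructions keyed by object type, is provided below. The function and field names used (e.g., resolve_gesture_instruction, 'custom_gesture_map') are illustrative placeholders only:
  • <?PHP
    // resolve the instruction intended by a user gesture: prefer any custom gesture/context
    // interpretation instructions carried by the tile object itself, and otherwise fall back
    // to system-wide instructions keyed by object type (field names are illustrative placeholders)
    function resolve_gesture_instruction($gesture_id, $tile_object, $system_gesture_map) {
        if (isset($tile_object['custom_gesture_map'][$gesture_id])) {
            return $tile_object['custom_gesture_map'][$gesture_id]; // object-specific interpretation
        }
        $type = $tile_object['type']; // e.g., image, SVG vector image, video, remote window
        if (isset($system_gesture_map[$type][$gesture_id])) {
            return $system_gesture_map[$type][$gesture_id]; // system-wide interpretation by object type
        }
        return null; // no instruction associated with this gesture in this context
    }
    ?>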
  • In some implementations, the server may extract the user gesture context, e.g., 617, from the user whiteboard session object. Using the gesture context (e.g., tile object data), the server may query a database, e.g., gestures database 606, for user instructions lookup corresponding to the user gestures and/or user gesture context. An example PHP/SQL command listing for querying a database for user instruction lookup is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    mysql_connect("254.93.179.112", $DBserver, $password); // access database server
    mysql_select_db("GESTURES.SQL"); // select database to search
    // create query (table name is an illustrative placeholder)
    $query = "SELECT user_instruction FROM GesturesTable WHERE gesture_id LIKE
      '%$gestureID%' AND context LIKE '%$user_context%'";
    $result = mysql_query($query); // perform the search query
    mysql_close(); // close database access
    ?>
  • In some implementations, the database may, in response to the user instruction lookup request, provide the requested user instruction lookup response, e.g., 619. In some implementations, the server may also query, e.g., 621, for tile objects within the client's viewport (e.g., using PHP/SQL commands similar to the examples above), and obtain, e.g., 622, from the whiteboard database 605, the tile objects data pertaining to tile objects within the viewport of the client.
  • In some implementations, the server may parse the user instruction lookup response and extract instructions to execute from the response. For example, the user instruction lookup response may include instructions to modify tile objects and/or instructions to modify the client viewport(s) of client(s) in the whiteboarding session. In some implementations, the server may extract tile object modification instructions, e.g., 623, and generate updated tile objects based on the existing tile object data and the extracted tile object modification instructions. In some implementations, the server may parse the user instruction lookup response and extract instructions to modify the viewport of client(s). The server may generate, e.g., 624, updated client viewport specifications and/or client viewport data using the updated tile objects, existing client viewport specifications, and/or extracted client viewport modification instructions. In some implementations, e.g., where the tile objects have been modified, the server may query (e.g., via PHP/SQL commands) for clients whose viewport contents should be modified to account for the modification of the tile objects and/or client viewport specifications, e.g., 625. The server may provide, e.g., 626, the query to the whiteboard database, and obtain, e.g., 627, a list of clients whose viewport contents have been affected by the modification. In some implementations, the server may refresh the affected clients' viewports. For example, the server may generate, for each affected client, updated client viewport specifications and/or client viewport content using the (updated) client viewport specifications and/or (updated) tile objects data, e.g., 629. In some implementations, the server may store, e.g., 630-631, the updated tile objects data and/or updated client viewport specifications (e.g., via updated whiteboard session objects, updated client viewport data, etc.). In some implementations, the server may provide the (updated) whiteboard session details and/or (updated) client viewport data, e.g., 632 a-c, to each of the affected client(s), e.g., 601 a-c. In some implementations, the client(s) may render, e.g., 633 a-c, the visualization represented in the client viewport data for display to the user, e.g., using data and/or program module(s) similar to the examples provided above with reference to FIG. 2. In some implementations, the client(s) may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g., 633 a-c, in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
  • FIGS. 7A-7D show logic flow diagrams illustrating example aspects of facilitating collaborative whiteboarding among a plurality of users in some embodiments of the DWE, e.g., a User Collaborative Whiteboarding (“UCW”) component 700. In some implementations, a user may desire to collaborate with other users in a collaborative whiteboarding session. For example, the user may desire to modify the contents of a digital whiteboard (e.g., one of a plurality of digital whiteboards) included within the collaborative whiteboarding session. The user may provide whiteboard input, e.g., 701, within a whiteboarding session into the client, indicating the user's desire to modify the collaborative whiteboarding session (e.g., modify the contents of a digital whiteboard; modify a participating client's view of a digital whiteboard, etc.). In some implementations, the client may capture the user's whiteboard input. The client may identify the user's whiteboard input in some implementations, e.g., 702. For example, the client may utilize a user input identification component such as the User Gesture Identification (“UGI”) component 1000 described below in FIG. 10, to identify gesture(s) made by the user on a touchscreen display of the client to modify the collaborative whiteboarding session. Upon identifying the user whiteboard input, the client may generate and provide a whiteboard input message, e.g., 703, to a collaboration server.
  • In some implementations, the server may parse the user whiteboard input, and extract the user ID, client ID, etc. from the whiteboard input message, e.g., 704. Based on the extracted information, the server may generate a whiteboard session query, e.g., 705, for the gesture context, e.g., the viewport content of the client being used by the user. In some implementations, a database may, in response to the whiteboard session query, provide the requested client viewport specifications and tile objects data, e.g., whiteboard session object 706. For example, the database may provide an SVG data file representing the tile objects and/or an XML data file representing the client viewport specifications.
  • In some implementations, the server may parse the whiteboard session object, and extract user context, e.g., client viewport specifications, tile object IDs of tile objects extending into the client viewport, client app mode (e.g., drawing/editing/viewing, etc., portrait/landscape, etc.), e.g., 707. The server may parse the whiteboard session object and extract user gesture(s) made by the user into the client during the whiteboard session, e.g., 708. The server may attempt to determine the user's intended instructions based on the user's gestures and the gesture context, e.g., as retrieved from the database. For example, the user's intended instructions may depend on the context within which the user gestures were made. For example, each user gesture may have a pre-specified meaning depending on the type of tile object upon which the user gesture was made. For example, a particular user gesture may have a pre-specified meaning depending on whether the object above which the gesture was made was a video, or a multi-page document. In some implementations, the tile object on which the gesture was made may include custom object-specific gesture/context interpretation instructions, which the server may utilize to determine the appropriate instructions intended by the user. In alternate implementations, the server and/or databases may have stored system-wide gesture/context interpretation instructions for each type of object (e.g., image, SVG vector image, video, remote window, etc.), and similar user instructions may be inferred from a user gesture above all objects of a certain type.
  • In some implementations, the server may query a whiteboard database for user instructions lookup corresponding to the user gestures and/or user gesture context, e.g., 709. The database may, in response to the user instruction lookup request, provide the requested user instruction lookup response, e.g., 710. In some implementations, the server may also query for tile objects within the client's viewport, and obtain the tile objects data pertaining to those tile objects.
  • In some implementations, the server may parse the user instruction lookup response and extract instructions to execute from the response, e.g., 711. For example, the user instruction lookup response may include instructions to modify tile objects and/or instructions to modify the client viewport(s) of client(s) in the whiteboarding session. In some implementations, the server may extract tile object modification instructions, e.g., 712. The server may modify tile object data of the tile objects in accordance with the tile object modification instructions. For example, the server may select a tile object modification instruction, e.g., 714. The server may parse the tile object modification instruction, and extract object IDs of the tile object(s) to be operated on, e.g., 715. Using the tile object modification instructions, the server may determine the operations to be performed on the tile object(s). In some implementations, the server may generate a query for the tile object data of the tile object(s) to be operated on, e.g., 716, and obtain the tile object data, e.g., 717, from a database. The server may generate updated tile object data for each of the tile objects operated on, using the current tile object data and the tile object modification operations from the tile object modification instructions, e.g., 718. In some implementations, the server may store the updated tile object data in a database, e.g., 719. In some implementations, the server may repeat the above procedure until all tile object modification instructions have been executed, see, e.g., 713.
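  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which a list of tile object modification instructions might be executed, is provided below. The operation names, field names, and the helper functions fetch_tile_object and store_tile_object are illustrative placeholders only:
  • <?PHP
    // execute a list of tile object modification instructions: for each instruction, fetch the
    // current tile object data, apply the requested operation, and store the updated object
    // (operation names, fields, and the helpers fetch_tile_object/store_tile_object are
    // illustrative placeholders)
    foreach ($tile_object_instructions as $instr) {
        foreach ($instr['object_ids'] as $object_id) {
            $object = fetch_tile_object($object_id); // e.g., query the whiteboard database
            switch ($instr['op']) {
                case 'translate': // move the object on the whiteboard
                    $object['x'] += $instr['dx'];
                    $object['y'] += $instr['dy'];
                    break;
                case 'scale': // resize the object
                    $object['width'] *= $instr['factor'];
                    $object['height'] *= $instr['factor'];
                    break;
                case 'delete': // mark the object as removed
                    $object['deleted'] = true;
                    break;
            }
            store_tile_object($object_id, $object); // persist the updated tile object data
        }
    }
    ?>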
  • In some implementations, the server may parse the user instruction lookup response, e.g., 720, and extract client viewport modification instructions, e.g., 721. The server may modify client viewport specifications of the client(s) in accordance with the viewport modification instructions. For example, the server may select a viewport modification instruction, e.g., 723. The server may parse the viewport modification instruction, and extract client IDs for which updated viewport specifications are to be generated, e.g., 724. Using the viewport modification instructions, the server may determine the operations to be performed on the client viewport specifications. In some implementations, the server may generate a whiteboard object query for the viewport specifications to be operated on, e.g., 725, and obtain the whiteboard session object including the viewport specifications, e.g., 726, from a database. The server may generate updated client viewport specifications for each of the client viewports being operated on, using the current client viewport specifications and the viewport modification operations from the viewport modification instructions, e.g., 727. For example, the server may utilize a component such as the client viewport specification component 400 described with reference to FIG. 4. In some implementations, the server may store the updated client viewport specifications via an updated whiteboard specification object in a database, e.g., 728. In some implementations, the server may repeat the above procedure until all client viewport modification instructions have been executed, see, e.g., 722.
  • In some implementations, e.g., where the tile objects and/or client viewport specifications have been modified, the server may query (e.g., via PHP/SQL commands) for clients whose viewport contents should be modified to account for the modification of the tile objects and/or client viewport specifications, e.g., 729-730. The server may provide the queries to the whiteboard database, and obtain, e.g., 731, a list of clients whose viewport contents have been affected by the modification. In some implementations, the server may refresh the affected clients' viewports. For example, the server may generate, e.g., 732, for each affected client, updated client viewport specifications and/or client viewport content using the (updated) client viewport specifications and/or (updated) tile objects data. For example, the server may utilize a component such as the viewport content generation component 500 described with reference to FIG. 5. In some implementations, the server may store, e.g., 733, the updated tile objects data and/or updated client viewport specifications (e.g., via updated whiteboard session objects, updated client viewport data, etc.). In some implementations, the server may provide the (updated) whiteboard session details and/or (updated) client viewport data, e.g., 734, to each of the affected client(s). In some implementations, the client(s) may render, e.g., 735, the visualization represented in the client viewport data for display to the user, e.g., using data and/or program module(s) similar to the examples provided above with reference to FIG. 2. In some implementations, the client(s) may continuously generate new scalable vector illustrations, render them in real time, and provide the rendered output to the visual display unit, e.g. 736, in order to produce continuous motion of the objects displayed on the visual display unit connected to the client.
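  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which the affected clients' viewports might be refreshed after tile objects and/or viewport specifications have been modified, is provided below. The helper names used (e.g., fetch_viewport_spec, generate_viewport_content, push_to_client) are illustrative placeholders only:
  • <?PHP
    // refresh each affected client after tile objects and/or viewport specifications have been
    // modified: regenerate its viewport content and push the updated data to the client
    // (the helper names used here are illustrative placeholders)
    foreach ($affected_client_ids as $client_id) {
        $spec = fetch_viewport_spec($client_id); // (updated) client viewport specifications
        $content = generate_viewport_content($spec); // e.g., via a component such as the VCG component 500
        store_viewport_data($client_id, $content); // persist the updated client viewport data
        push_to_client($client_id, $spec, $content); // provide updated session details/viewport data
    }
    ?>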
  • FIGS. 8A-8I show block diagrams illustrating example aspects of a pie-menu user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE. In some implementations, the DWE may provide a variety of features for the user when the user provides input gestures into a client device involved in a digital collaborative whiteboarding session. For example, under a main menu 801, the DWE may provide a variety of palette/drawing tools 802, library tools 803 and/or mini-map/finder tools 804. For example, the DWE may provide a variety of palette/drawing tools, including but not limited to: colors 802 a, stroke type 802 b, precision drawing mode 802 c, eraser 802 d, cut 802 e, effects 802 f, styles 802 g, tags 802 h, undo feature 802 i, and/or the like. As another example, the DWE may provide library tools such as, but not limited to: import/open file 803 a, access clipboard 803 b, and/or the like 803 c. As another example, the DWE may provide mini-map/finder tools such as, but not limited to: zoom 804 a, collaborators 804 b, bookmarks 804 c, timeline view 804 d, and/or the like.
  • In some implementations, a user may access a main menu by pressing the touchscreen with a single finger, e.g., 805. In some implementations, a menu, such as a pie menu, e.g., 807, may be provided for the user when the user attempts to access the main menu by pressing a single finger on the touchscreen, e.g., 806. In some implementations, the user may press a stylus against the touchscreen, e.g., 808. In some implementations, the menu options provided to the user may vary depending on whether the user uses a single finger touch or a single stylus touch.
  • In some implementations, a user may access a drawing menu by swiping down on the touchscreen with three fingers, e.g., 809. In some implementations, a menu, such as a drawing menu, e.g., 811, may be provided for the user when the user attempts to access the drawing menu by swiping three fingers on the touchscreen, e.g., 810. In some implementations, a drawing palette may include a variety of tools. For example, the drawing palette may include a drawing tool selector, e.g., 811, for selecting tools from the drawing palette. In some implementations, a variety of commonly used drawing tools may be provided separately for the user to easily access. For example, an eraser tool 811 a, cut tool 811 b, tag tool 811 c, help tool 811 d, and/or the like may be provided as separate user interface objects for the user to access.
  • In some implementations, a user may select a color from a color picker tool within the drawing palette menu. For example, the user may swipe three fingers on the touchscreen to obtain the drawing palette, e.g., 812. From the drawing palette, the user may select a color picker by tapping on an active color picker, e.g., 813. Upon tapping the color picker, a color picker menu, e.g., 814 may be provided for the user via the user interface.
  • In some implementations, a user may tag an object within the digital whiteboard, e.g., 815. For example, within the drawing palette, the user may tap on a user interface element, e.g., 816. In response, the user may be provided with a virtual keyboard 818, as well as a virtual entry form 817 for the user to type a tag into via the virtual keyboard.
  • In some implementations, a user may enter into a precision drawing mode, wherein the user may be able to accurately place/draw tile objects. For example, the user may place two fingers on the touchscreen and hold the finger positions. For the duration that the user holds the two-finger precision drawing gesture, the user may be provided with precision drawing capabilities. For example, the user may be able to precisely draw a line to the length, orientation and placement of the user's choosing, e.g., 820. Similarly, using other drawing tools, the user may be able to draw precise circles, e.g., 821, rectangles, e.g., 822, and/or the like. In general, it is contemplated that the precision of any drawing tool provided may be enhanced by entering into the precision drawing mode by using the two-finger hold gesture.
  • In some implementations, a user may be able to toggle between an erase and draw mode using a two-finger swipe. For example, if the user swipes downwards, an erase mode may be enabled, e.g., 824, while if the user swipes upwards, the draw mode may be enabled, e.g., 825.
  • In some implementations, a user may be able to access an overall map of the whiteboard by swiping all five fingers down simultaneously, e.g., 826. Upon performing a five-finger swipe, e.g., 827, a map of the digital whiteboard, e.g., 828, may be provided for the user. In some implementations, the user may be able to zoom in or zoom out of a portion of the digital whiteboard by using two fingers, and moving the two fingers either together (e.g., zoom out) or away from each other (e.g., zoom in), see, e.g., 829. In such an access map mode, a variety of features and/or information may be provided for the user. For example, the user may be provided with a micromap, which may provide an indication of the location of the user's client viewport relative to the rest of the digital whiteboard. The user may be provided with information on other users connected to the whiteboarding session, objects within the whiteboard, tags providing information on owners of objects in the whiteboard, etc., a timeline of activity showing the amount of activity as a function of time, and/or the like information and/or features.
  • FIGS. 9A-9C show block diagrams illustrating example aspects of a chord-based user whiteboarding gesture system for digital whiteboard collaboration in some embodiments of the DWE. With reference to FIG. 9A, in some implementations, a chord-based gesture system may utilize a number of variables to determine the meaning of a user gesture, e.g., the intentions of a user to instruct the DWE. For example, variables such as, but not limited to: number of fingers/styli inputs in the chord 901, pressure and area of application of pressure on each chord element 902, contextual information about the object underneath the chord 903, displacement, velocity, direction of the chord movement 904, timing associated with the chord (e.g., length of hold, pause, frequency/duty cycle of tapping, etc.), and/or the like, may affect the interpretation of what instructions are intended by a gesture made by the user. For example, with reference to FIG. 9B, chords of various types may be utilized to obtain menus, perform drawing, editing, erasing features, modify the view of the client, find editing collaborators, and/or the like, see, e.g., 906. For example, displacing a single finger on an empty portion of the screen may automatically result in a draw mode, and a line may be drawn on the screen following the path of the single finger, e.g., 907. As another example, holding a finger down and releasing quickly may provide a precision drawing mode, wherein when a finger is drawn along the screen, a line may be drawn with high precision following the path of the finger, e.g., 908-909. As another example, holding a finger down and releasing after a longer time may provide a menu instead of a precision drawing mode, e.g., 910. As another example, when three fingers are placed on the screen in the vicinity of each other, an eraser tool may be provided underneath the position of the three-finger chord. When the three-finger chord is displaced, an eraser tool may also be displaced underneath the chord, thus erasing (portions of) objects over which the chord is passed by the user, e.g., 911. As another example, with reference to FIG. 9C, when two fingers are held down and quickly released, a zoom tool may be provided for the user. The user may then place two fingers down on the screen, and move the fingers together or away from each other to zoom out or zoom in, respectively, e.g., 912. As another example, when two fingers are placed down and held for a longer period of time, this may provide the user with a tool to select an object on the screen, and modify the object (e.g., modify the scale, aspect ratio, etc. of the object), e.g., 913. As another example, when four or five fingers are placed down on the screen and quickly released, the user may be provided with a pan function, e.g., 914. As another example, when a user double-taps on a pan indicator, the user may be provided with a zoom and/or overview selection user interface element, e.g., 915. As the examples above describe, various gesture features may be provided depending on the attributes of the chord, including, but not limited to: the number of chord elements, timing of the chord elements, pressure/area of the chord elements, displacement/velocity/acceleration/orientation of the chord elements, and/or the like.
  • FIG. 10 shows a logic flow diagram illustrating example aspects of identifying user gestures of a whiteboarding session collaborator in some embodiments of the DWE, e.g., a User Gesture Identification (“UGI”) component 1000. In some implementations, a user may provide input (e.g., one or more touchscreen gestures) into a client, e.g., 1001. The client may obtain the user input raw data, and identify a chord based on the raw data. For example, the client may determine the number of fingers pressed onto the touchscreen, whether a stylus was incorporated in the user touch raw data, which fingers of the user were pressed onto the touchscreen, and/or the like, e.g., 1002. The client may determine the spatial coordinates of each of the chord elements (e.g., wherein each simultaneous finger/stylus touch is a chord element of the chord comprised of the finger/stylus touches), e.g., 1003. For example, the client may determine the [x,y] coordinates for each chord element. In some implementations, the client may determine the touch screen pressure for each chord element, area of contact for each chord element (e.g., which may also be used to determine whether a chord element is a finger or a stylus touch, etc.), e.g., 1004. In some implementations, the client may determine time parameters for each chord element of the chord, e.g., 1005. For example, the client may determine parameters such as hold duration, touch frequency, touch interval, pause time, etc. for each chord element of the chord and/or an average time for each such parameter for the entire chord. In some implementations, the client may determine motion parameters for each chord element of the chord, e.g., 1006. For example, the client may determine displacement, direction vector, acceleration, velocity, etc. for each chord element of the chord. Based on the chord, the client may determine whether the chord gesture is for modifying a client view, or for modifying a tile object present in a digital whiteboard. In some implementations, the client may generate a query (e.g., of a database stored in the client's memory) to determine whether the identified chord operates on the client viewport or tile objects. If the client determines that the chord operates on a viewport, e.g., 1008, option “Yes,” the client may generate a query for a gesture identifier, and associated instructions using the chord, spatial location, touchscreen pressure, time parameters, motion parameters, and/or the like. If the client determines that the chord operates on tile object(s), e.g., 1008, option “No,” the client may identify the tile object(s) affected by the user input using the location and motion parameters for the chord elements, e.g., 1010. The client may determine whether the tile object(s) has any associated context/gesture interpretation instructions/data, e.g., 1011. If the object does not have custom context instructions, e.g., 1012, option “No,” the client may utilize system-wide context interpretation instructions based on the object type of the tile object, e.g., 1013. If the object has custom context instructions, e.g., 1012, option “Yes,” the client may obtain the custom object-specific context interpretation instructions, e.g., 1014.
In some implementations, the client may determine the gesture identifier and associated instructions using the chord, spatial location, touchscreen pressure, time parameters and motion parameters, as well as object/system-specified context interpretation instructions, e.g., 1015, and may return the user gesture identifier and associated gesture instructions, e.g., 1016. It is to be understood that any of the actions recited above may be performed by the client and/or any other entity and/or component of the DWE.
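  • For further illustration, an example listing, written substantially in the form of PHP commands for consistency with the listings above (and which, per the preceding paragraph, could equally be performed by the client or any other DWE component), showing one non-limiting way in which an identified chord might be classified as a viewport gesture or a tile object gesture, is provided below. The helper names used (e.g., tile_objects_at, lookup_gesture) are illustrative placeholders only:
  • <?PHP
    // classify an identified chord as a viewport gesture or a tile object gesture based on
    // whether any tile object lies under the chord position, and look up a gesture identifier
    // (the helpers tile_objects_at and lookup_gesture are illustrative placeholders)
    function classify_chord($chord, $client_viewport_spec) {
        $objects = tile_objects_at($chord['x'], $chord['y'], $client_viewport_spec);
        if (count($objects) == 0) {
            // no tile object under the chord: treat it as a viewport gesture (e.g., pan, zoom)
            return array('target' => 'viewport',
                'gesture' => lookup_gesture($chord, 'viewport'));
        }
        // otherwise interpret the chord against the topmost tile object under it, using its
        // custom or system-wide gesture/context interpretation instructions
        return array('target' => 'tile_object',
            'object_id' => $objects[0]['object_id'],
            'gesture' => lookup_gesture($chord, $objects[0]['type']));
    }
    ?>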
  • FIGS. 11A-11B show block diagrams illustrating example aspects of a whiteboarding telepresence system for digital whiteboard collaboration in some embodiments of the DWE. In some implementations, a plurality of users may be collaborating with each other, for example, via a digital whiteboard collaboration system as described above. In some implementations, the users may be interacting with each other via other communication and/or collaboration systems. In some implementations, a user, e.g., 1101 a, may desire to visually communicate with another user, e.g., 1101 b. The user 1101 a may be utilizing a touchscreen interface, e.g., 1102 a, and user 1101 b may be utilizing touchscreen interface 1102 b. For example, the touchscreen interfaces may be operating in conjunction with other DWE components to provide a digital whiteboard collaboration experience for the users. In some implementations, the users may utilize a telepresence system, e.g., 1103 a-b, to enhance the collaborative session between the users. For example, a user 1101 a may be able to visualize user 1101 b via the telepresence system. The user 1101 a may be able to hear (e.g., via a speaker system) and see (e.g., via auxiliary display) user 1101 b. The user 1101 a may also be able to speak to user 1101 b via a microphone, and may be able to provide a video of himself (e.g., via a camera). Similarly, user 1101 b may be able to see and hear user 1101 a, and provide audio and video to user 1101 a via user 1101 b's telepresence interface.
  • In some implementations, users utilizing different types of device may interactively collaborate via a telepresence system. For example, with reference to FIG. 11B, user 1104 a may be utilizing a large-screen touch interface, e.g., 1105 a, while a user 1104 b may be utilizing a portable device, e.g., 1105 b. In such implementations, the user interface of the collaborative session, as well as the telepresence system, may be modified according to the device being used by the user in the collaborative session. For example, the user 1104 a, utilizing the large-screen touch interface 1105 a, may be utilizing an auxiliary telepresence system 1106 a. The user 1104 b may, however, be utilizing a telepresence system inbuilt into the device, e.g., 1106 b. Accordingly, in some implementations, the users may interact with each other via telepresence for collaborative editing across a variety of user devices.
  • FIGS. 12A-12B show a block diagram and logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE. With reference to FIG. 12A, in some embodiments, the DWE may provide a multi-user touchscreen device 1201 (see also FIG. 1, 102 c). The touchscreen device may include “display-only” cells 1202 that do not have any touch capability (e.g., LCD, LED displays, projection displays, etc.). In addition, the touchscreen device may include strategically positioned touch cells, e.g., 1203, with which a user 1208 may interact using gestures such as those described above in the description with respect to FIGS. 8A-8I. For example, the touch cells may be placed within an ergonomic zone 1204 designed such that an average user would be likely to feel comfortable accessing the ergonomic zone to provide touch gestures into the touch cell.
  • In some embodiments, the multi-user touchscreen display may be augmented to provide continuous projection via a display system aligned adjacent to the touchscreen display along a non-parallel (or even non-linear/non-Euclidean) plane such as a wall, screen, or structure (e.g., curved surface of a building).
  • In some embodiments, a portion of the touch cells within the ergonomic zone may be enabled at any time. For example, a camera may be placed in the vicinity of the touchscreen device, and it may record video of the neighborhood of the touchscreen device. The camera may record users in the neighborhood. The DWE may identify such users in the video, determine the touch cells within the ergonomic zone that are closest to the identified users, and may enable those touch cells alone for the users to provide touchscreen input into the touch cells. In some embodiments, the DWE may provide visual indicators of the enabled touch cells so that users may identify easily the touch cells that are enabled. In some embodiments, a user may provide touch input into a touch cell, making it an “active” touch cell, e.g., 1205. In such scenarios, the DWE may provide a floating toolbar for the user to access features provided by the DWE for digital whiteboard collaboration. The position of the floating toolbar may automatically be determined by the DWE based on the touch cells that are active, and the coordinate locations of touch input provided by the users into the active and enabled touch cells.
  • FIG. 12B shows a logic flow diagram illustrating example aspects of digital whiteboard ergonomics in some embodiments of the DWE, e.g., a User Whiteboard Ergonomics (“UWE”) component. In some embodiments, the DWE may obtain touch input from one or more users into active and enabled touch cells within a multi-user touchscreen device, e.g., 1211. The DWE may determine if any users are using touch cells, and identify the number of users, e.g., 1212. The DWE may select a user, e.g., 1213, and identify the touch cell being used by that user, e.g., 1214. For example, the DWE may utilize procedures such as those within the UGI component of FIG. 10. Based on the touch cell ID number, the DWE may set a coarse position for a floating toolbar within the active touch cell being used by the selected user. Then, the DWE may identify the coordinate position (e.g., [x,y,z]) at which the user is specifically applying input at any time, e.g., 1216. Based on the coordinate position of the user input, the DWE may set a fine position within the active cell using the coordinate position of the user activity, e.g., 1217. The DWE may generate a floating toolbar display, e.g., based on the specific user gesture being provided by the user (e.g., as determined using the UGI component of FIG. 10), e.g., 1218. The DWE may display the generated floating toolbar at the fine coordinate position set based on the ID number of the active cell and the user touch input location, e.g., 1219. The DWE may perform this procedure for all users, see, e.g., 1220.
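  • For further illustration, an example listing, written substantially in the form of PHP commands, showing one non-limiting way in which a coarse and fine position for the floating toolbar might be computed from the active touch cell and the coordinates of the user's touch input, is provided below. The field names and pixel offsets used are illustrative placeholders only:
  • <?PHP
    // position a floating toolbar for a user of a multi-user touchscreen: a coarse position is
    // derived from the active touch cell, and a fine position from the coordinates of the
    // user's touch input within that cell (field names and offsets are illustrative placeholders)
    function toolbar_position($touch_cell, $touch_point) {
        // coarse position: the origin of the active touch cell within the overall display
        $coarse_x = $touch_cell['origin_x'];
        $coarse_y = $touch_cell['origin_y'];
        // fine position: offset the toolbar near, but not underneath, the user's touch point
        $fine_x = $coarse_x + $touch_point['x'] + 40; // e.g., 40 pixels to the right of the touch
        $fine_y = $coarse_y + $touch_point['y'] - 60; // e.g., 60 pixels above the touch
        // clamp the toolbar position to the bounds of the active touch cell
        $fine_x = min(max($fine_x, $coarse_x), $coarse_x + $touch_cell['width'] - 1);
        $fine_y = min(max($fine_y, $coarse_y), $coarse_y + $touch_cell['height'] - 1);
        return array('x' => $fine_x, 'y' => $fine_y);
    }
    ?>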
  • DWE Controller
  • FIG. 13 shows a block diagram illustrating example aspects of a DWE controller 1301. In this embodiment, the DWE controller 1301 may serve to aggregate, process, store, search, serve, identify, instruct, generate, match, and/or facilitate interactions with a computer through various technologies, and/or other related data.
  • Users, e.g., 1333 a, which may be people and/or other systems, may engage information technology systems (e.g., computers) to facilitate information processing. In turn, computers employ processors to process information; such processors 1303 may be referred to as central processing units (CPU). One form of processor is referred to as a microprocessor. CPUs use communicative circuits to pass binary encoded signals acting as instructions to enable various operations. These instructions may be operational and/or data instructions containing and/or referencing other instructions and data in various processor accessible and operable areas of memory 1329 (e.g., registers, cache memory, random access memory, etc.). Such communicative instructions may be stored and/or transmitted in batches (e.g., batches of instructions) as programs and/or data components to facilitate desired operations. These stored instruction codes, e.g., programs, may engage the CPU circuit components and other motherboard and/or system components to perform desired operations. One type of program is a computer operating system, which may be executed by a CPU on a computer; the operating system enables and facilitates users to access and operate computer information technology and resources. Some resources that may be employed in information technology systems include: input and output mechanisms through which data may pass into and out of a computer; memory storage into which data may be saved; and processors by which information may be processed. These information technology systems may be used to collect data for later retrieval, analysis, and manipulation, which may be facilitated through a database program. These information technology systems provide interfaces that allow users to access and operate various system components.
  • In one embodiment, the DWE controller 1301 may be connected to and/or communicate with entities such as, but not limited to: one or more users from user input devices 1311; peripheral devices 1312; an optional cryptographic processor device 1328; and/or a communications network 1313. For example, the DWE controller 1301 may be connected to and/or communicate with users, e.g., 1333 a, operating client device(s), e.g., 1333 b, including, but not limited to, personal computer(s), server(s) and/or various mobile device(s) including, but not limited to, cellular telephone(s), smartphone(s) (e.g., iPhone®, Blackberry®, Android OS-based phones etc.), tablet computer(s) (e.g., Apple iPad™, HP Slate™, Motorola Xoom™, etc.), eBook reader(s) (e.g., Amazon Kindle™, Barnes and Noble's Nook™ eReader, etc.), laptop computer(s), notebook(s), netbook(s), gaming console(s) (e.g., XBOX Live™, Nintendo® DS, Sony PlayStation® Portable, etc.), portable scanner(s), and/or the like.
  • Networks are commonly thought to comprise the interconnection and interoperation of clients, servers, and intermediary nodes in a graph topology. It should be noted that the term “server” as used throughout this application refers generally to a computer, other device, program, or combination thereof that processes and responds to the requests of remote users across a communications network. Servers serve their information to requesting “clients.” The term “client” as used herein refers generally to a computer, program, other device, user and/or combination thereof that is capable of processing and making requests and obtaining and processing any responses from servers across a communications network. A computer, other device, program, or combination thereof that facilitates, processes information and requests, and/or furthers the passage of information from a source user to a destination user is commonly referred to as a “node.” Networks are generally thought to facilitate the transfer of information from source points to destinations. A node specifically tasked with furthering the passage of information from a source to a destination is commonly called a “router.” There are many forms of networks such as Local Area Networks (LANs), Pico networks, Wide Area Networks (WANs), Wireless Networks (WLANs), etc. For example, the Internet is generally accepted as being an interconnection of a multitude of networks whereby remote clients and servers may access and interoperate with one another.
  • The DWE controller 1301 may be based on computer systems that may comprise, but are not limited to, components such as: a computer systemization 1302 connected to memory 1329.
  • Computer Systemization
  • A computer systemization 1302 may comprise a clock 1330, central processing unit (“CPU(s)” and/or “processor(s)” (these terms are used interchangeably throughout the disclosure unless noted to the contrary)) 1303, a memory 1329 (e.g., a read only memory (ROM) 1306, a random access memory (RAM) 1305, etc.), and/or an interface bus 1307, and most frequently, although not necessarily, are all interconnected and/or communicating through a system bus 1304 on one or more (mother)board(s) 1302 having conductive and/or otherwise transportive circuit pathways through which instructions (e.g., binary encoded signals) may travel to effectuate communications, operations, storage, etc. The computer systemization may be connected to a power source 1386; e.g., optionally the power source may be internal. Optionally, a cryptographic processor 1326 and/or transceivers (e.g., ICs) 1374 may be connected to the system bus. In another embodiment, the cryptographic processor and/or transceivers may be connected as either internal and/or external peripheral devices 1312 via the interface bus I/O. In turn, the transceivers may be connected to antenna(s) 1375, thereby effectuating wireless transmission and reception of various communication and/or sensor protocols; for example the antenna(s) may connect to: a Texas Instruments WiLink WL1283 transceiver chip (e.g., providing 802.11n, Bluetooth 3.0, FM, global positioning system (GPS) (thereby allowing the DWE controller to determine its location)); a Broadcom BCM4329FKUBG transceiver chip (e.g., providing 802.11n, Bluetooth 2.1+EDR, FM, etc.), BCM28150 (HSPA+) and BCM2076 (Bluetooth 4.0, GPS, etc.); a Broadcom BCM47501UB8 receiver chip (e.g., GPS); an Infineon Technologies X-Gold 618-PMB9800 (e.g., providing 2G/3G HSDPA/HSUPA communications); Intel's XMM 7160 (LTE & DC-HSPA); Qualcomm's CDMA(2000), Mobile Data/Station Modem, Snapdragon; and/or the like. The system clock typically has a crystal oscillator and generates a base signal through the computer systemization's circuit pathways. The clock may be coupled to the system bus and various clock multipliers that will increase or decrease the base operating frequency for other components interconnected in the computer systemization. The clock and various components in a computer systemization drive signals embodying information throughout the system. Such transmission and reception of instructions embodying information throughout a computer systemization may be referred to as communications. These communicative instructions may further be transmitted, received, and the cause of return and/or reply communications beyond the instant computer systemization to: communications networks, input devices, other computer systemizations, peripheral devices, and/or the like. It should be understood that in alternative embodiments, any of the above components may be connected directly to one another, connected to the CPU, and/or organized in numerous variations employed as exemplified by various computer systems.
  • The CPU comprises at least one high-speed data processor adequate to execute program components for executing user and/or system-generated requests. Often, the processors themselves will incorporate various specialized processing units, such as, but not limited to: floating point units, integer processing units, integrated system (bus) controllers, logic operating units, memory management control units, etc., and even specialized processing sub-units like graphics processing units, digital signal processing units, and/or the like. Additionally, processors may include internal fast access addressable memory, and be capable of mapping and addressing memory 1329 beyond the processor itself; internal memory may include, but is not limited to: fast registers, various levels of cache memory (e.g., level 1, 2, 3, etc.), RAM, etc. The processor may access this memory through the use of a memory address space that is accessible via instruction address, which the processor can construct and decode, allowing it to access a circuit path to a specific memory address space having a memory state/value. The CPU may be a microprocessor such as: AMD's Athlon, Duron and/or Opteron; ARM's classic (e.g., ARM7/9/11), embedded (Cortex-M/R), application (Cortex-A), embedded and secure processors; IBM and/or Motorola's DragonBall and PowerPC; IBM's and Sony's Cell processor; Intel's Atom, Celeron (Mobile), Core (2/Duo/i3/i5/i7), Itanium, Pentium, Xeon, and/or XScale; and/or the like processor(s). The CPU interacts with memory through instruction passing through conductive and/or transportive conduits (e.g., (printed) electronic and/or optic circuits) to execute stored instructions (i.e., program code). Such instruction passing facilitates communication within the DWE controller and beyond through various interfaces. Should processing requirements dictate a greater amount of speed and/or capacity, distributed processors (e.g., Distributed DWE), mainframe, multi-core, parallel, and/or super-computer architectures may similarly be employed. Alternatively, should deployment requirements dictate greater portability, smaller mobile devices (e.g., smartphones, Personal Digital Assistants (PDAs), etc.) may be employed.
  • Depending on the particular implementation, features of the DWE may be achieved by implementing a microcontroller such as CAST's R8051XC2 microcontroller; Intel's MCS 51 (i.e., 8051 microcontroller); and/or the like. Also, to implement certain features of the DWE, some feature implementations may rely on embedded components, such as: Application-Specific Integrated Circuit (“ASIC”), Digital Signal Processing (“DSP”), Field Programmable Gate Array (“FPGA”), and/or the like embedded technology. For example, any of the DWE component collection (distributed or otherwise) and/or features may be implemented via the microprocessor and/or via embedded components; e.g., via ASIC, coprocessor, DSP, FPGA, and/or the like. Alternately, some implementations of the DWE may be implemented with embedded components that are configured and used to achieve a variety of features or signal processing.
  • Depending on the particular implementation, the embedded components may include software solutions, hardware solutions, and/or some combination of both hardware/software solutions. For example, DWE features discussed herein may be achieved through implementing FPGAs, which are semiconductor devices containing programmable logic components called “logic blocks”, and programmable interconnects, such as the high performance FPGA Virtex series and/or the low cost Spartan series manufactured by Xilinx. Logic blocks and interconnects can be programmed by the customer or designer, after the FPGA is manufactured, to implement any of the DWE features. A hierarchy of programmable interconnects allows logic blocks to be interconnected as needed by the DWE system designer/administrator, somewhat like a one-chip programmable breadboard. An FPGA's logic blocks can be programmed to perform the operation of basic logic gates such as AND and XOR, or more complex combinational operators such as decoders or simple mathematical operations. In most FPGAs, the logic blocks also include memory elements, which may be circuit flip-flops or more complete blocks of memory. In some circumstances, the DWE may be developed on regular FPGAs and then migrated into a fixed version that more resembles ASIC implementations. Alternate or coordinating implementations may migrate DWE controller features to a final ASIC instead of or in addition to FPGAs. Depending on the implementation, all of the aforementioned embedded components and microprocessors may be considered the “CPU” and/or “processor” for the DWE.
  • Power Source
  • The power source 1386 may be of any standard form for powering small electronic circuit board devices such as the following power cells: alkaline, lithium hydride, lithium ion, lithium polymer, nickel cadmium, solar cells, and/or the like. Other types of AC or DC power sources may be used as well. In the case of solar cells, in one embodiment, the case provides an aperture through which the solar cell may capture photonic energy. The power cell 1386 is connected to at least one of the interconnected subsequent components of the DWE thereby providing an electric current to all the interconnected components. In one example, the power source 1386 is connected to the system bus component 1304. In an alternative embodiment, an outside power source 1386 is provided through a connection across the I/O 1308 interface. For example, a USB and/or IEEE 1394 connection carries both data and power across the connection and is therefore a suitable source of power.
  • Interface Adapters
  • Interface bus(ses) 1307 may accept, connect, and/or communicate to a number of interface adapters, frequently, although not necessarily in the form of adapter cards, such as but not limited to: input output interfaces (I/O) 1308, storage interfaces 1309, network interfaces 1310, and/or the like. Optionally, cryptographic processor interfaces 1327 similarly may be connected to the interface bus. The interface bus provides for the communications of interface adapters with one another as well as with other components of the computer systemization. Interface adapters are adapted for a compatible interface bus. Interface adapters may connect to the interface bus via expansion and/or slot architecture. Various expansion and/or slot architectures may be employed, such as, but not limited to: Accelerated Graphics Port (AGP), Card Bus, ExpressCard, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), Thunderbolt, and/or the like.
  • Storage interfaces 1309 may accept, communicate, and/or connect to a number of storage devices such as, but not limited to: storage devices 1314, removable disc devices, and/or the like. Storage interfaces may employ connection protocols such as, but not limited to: (Ultra) (Serial) Advanced Technology Attachment (Packet Interface) ((Ultra) (Serial) ATA(PI)), (Enhanced) Integrated Drive Electronics ((E)IDE), Institute of Electrical and Electronics Engineers (IEEE) 1394, Ethernet, fiber channel, Small Computer Systems Interface (SCSI), Thunderbolt, Universal Serial Bus (USB), and/or the like.
  • Network interfaces 1310 may accept, communicate, and/or connect to a communications network 1313. Through a communications network 1313, the DWE controller is accessible through remote clients 1333 b (e.g., computers with web browsers) by users 1333 a. Network interfaces may employ connection protocols such as, but not limited to: direct connect, Ethernet (thick, thin, twisted pair 10/100/1000 Base T, and/or the like), Token Ring, wireless connection such as IEEE 802.11a-x, and/or the like. Should processing requirements dictate a greater amount of speed and/or capacity, distributed network controller architectures (e.g., Distributed DWE) may similarly be employed to pool, load balance, and/or otherwise increase the communicative bandwidth required by the DWE controller. A communications network may be any one and/or the combination of the following: a direct interconnection; the Internet; a Local Area Network (LAN); a Metropolitan Area Network (MAN); an Operating Missions as Nodes on the Internet (OMNI); a secured custom connection; a Wide Area Network (WAN); a wireless network (e.g., employing protocols such as, but not limited to a Wireless Application Protocol (WAP), I-mode, and/or the like); and/or the like. A network interface may be regarded as a specialized form of an input output interface. Further, multiple network interfaces 1310 may be used to engage with various communications network types 1313. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and/or unicast networks.
  • Input Output interfaces (I/O) 1308 may accept, communicate, and/or connect to user input devices 1311, peripheral devices 1312, cryptographic processor devices 1328, and/or the like. I/O may employ connection protocols such as, but not limited to: audio: analog, digital, monaural, RCA, stereo, and/or the like; data: Apple Desktop Bus (ADB), Bluetooth, IEEE 1394a-b, serial, universal serial bus (USB); infrared; joystick; keyboard; midi; optical; PC AT; PS/2; parallel; radio; video interface: Apple Desktop Connector (ADC), BNC, coaxial, component, composite, digital, DisplayPort, Digital Visual Interface (DVI), high-definition multimedia interface (HDMI), RCA, RF antennae, S-Video, VGA, and/or the like; wireless transceivers: 802.11a/b/g/n/x; Bluetooth; cellular (e.g., code division multiple access (CDMA), high speed packet access (HSPA(+)), high-speed downlink packet access (HSDPA), global system for mobile communications (GSM), long term evolution (LTE), WiMax, etc.); and/or the like. One output device may be a video display, which may take the form of a Cathode Ray Tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED), Organic Light Emitting Diode (OLED), Plasma, and/or the like based monitor with an interface (e.g., VGA, DVI circuitry and cable) that accepts signals from a video interface. The video interface composites information generated by a computer systemization and generates video signals based on the composited information in a video memory frame. Another output device is a television set, which accepts signals from a video interface. Often, the video interface provides the composited video information through a video connection interface that accepts a video display interface (e.g., an RCA composite video connector accepting an RCA composite video cable; a DVI connector accepting a DVI display cable, HDMI, etc.).
  • User input devices 1311 often are a type of peripheral device 1312 (see below) and may include: card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, microphones, mouse (mice), remote controls, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors (e.g., accelerometers, ambient light, GPS, gyroscopes, proximity, etc.), styluses, and/or the like.
  • Peripheral devices 1312 may be connected and/or communicate to I/O and/or other facilities of the like such as network interfaces, storage interfaces, directly to the interface bus, system bus, the CPU, and/or the like. Peripheral devices may be external, internal and/or part of the DWE controller. Peripheral devices may include: antenna, audio devices (e.g., line-in, line-out, microphone input, speakers, etc.), cameras (e.g., still, video, webcam, etc.), dongles (e.g., for copy protection, ensuring secure transactions with a digital signature, and/or the like), external processors (for added capabilities; e.g., crypto devices 1328), force-feedback devices (e.g., vibrating motors), near field communication (NFC) devices, network interfaces, printers, radio frequency identifiers (RFIDs), scanners, storage devices, transceivers (e.g., cellular, GPS, etc.), video devices (e.g., goggles, monitors, etc.), video sources, visors, and/or the like. Peripheral devices often include types of input devices (e.g., microphones, cameras, etc.).
  • It should be noted that although user input devices and peripheral devices may be employed, the DWE controller may be embodied as an embedded, dedicated, and/or monitor-less (i.e., headless) device, wherein access would be provided over a network interface connection.
  • Cryptographic units such as, but not limited to, microcontrollers, processors 1326, interfaces 1327, and/or devices 1328 may be attached, and/or communicate with the DWE controller. A MC68HC16 microcontroller, manufactured by Motorola Inc., may be used for and/or within cryptographic units. The MC68HC16 microcontroller utilizes a 16-bit multiply-and-accumulate instruction in the 16 MHz configuration and requires less than one second to perform a 512-bit RSA private key operation. Cryptographic units support the authentication of communications from interacting agents, as well as allowing for anonymous transactions. Cryptographic units may also be configured as part of the CPU. Equivalent microcontrollers and/or processors may also be used. Other commercially available specialized cryptographic processors include: the Broadcom's CryptoNetX and other Security Processors; nCipher's nShield (e.g., Solo, Connect, etc.), SafeNet's Luna PCI (e.g., 7100) series; Semaphore Communications' 40 MHz Roadrunner 184; sMIP's (e.g., 208956); Sun's Cryptographic Accelerators (e.g., Accelerator 6000 PCIe Board, Accelerator 500 Daughtercard); Via Nano Processor (e.g., L2100, L2200, U2400) line, which is capable of performing 500+MB/s of cryptographic instructions; VLSI Technology's 33 MHz 6868; and/or the like.
  • Memory
  • Generally, any mechanization and/or embodiment allowing a processor to affect the storage and/or retrieval of information is regarded as memory 1329. However, memory is a fungible technology and resource; thus, any number of memory embodiments may be employed in lieu of or in concert with one another. It is to be understood that the DWE controller and/or a computer systemization may employ various forms of memory 1329. For example, a computer systemization may be configured wherein the operation of on-chip CPU memory (e.g., registers), RAM, ROM, and any other storage devices are provided by a paper punch tape or paper punch card mechanism; however, such an embodiment would result in an extremely slow rate of operation. In one configuration, memory 1329 may include ROM 1306, RAM 1305, and a storage device 1314. A storage device 1314 may employ any number of computer storage devices/systems. Storage devices may include a drum; a (fixed and/or removable) magnetic disk drive; a magneto-optical drive; an optical drive (i.e., Blu-ray, CD ROM/RAM/Recordable (R)/ReWritable (RW), DVD R/RW, HD DVD R/RW, etc.); an array of devices (e.g., Redundant Array of Independent Disks (RAID)); solid state memory devices (USB memory, solid state drives (SSD), etc.); other processor-readable storage mediums; and/or other devices of the like. Thus, a computer systemization generally requires and makes use of memory.
  • Component Collection
  • The memory 1329 may contain a collection of program and/or database components and/or data such as, but not limited to: operating system component(s) 1315 (operating system); information server component(s) 1316 (information server); user interface component(s) 1317 (user interface); Web browser component(s) 1318 (Web browser); database(s) 1319; mail server component(s) 1321; mail client component(s) 1322; cryptographic server component(s) 1320 (cryptographic server); the DWE component(s) 1335; and/or the like (i.e., collectively a component collection). These components may be stored and accessed from the storage devices and/or from storage devices accessible through an interface bus. Although non-conventional program components such as those in the component collection may be stored in a local storage device 1314, they may also be loaded and/or stored in memory such as: peripheral devices, RAM, remote storage facilities through a communications network, ROM, various forms of memory, and/or the like.
  • Operating System
  • The operating system component 1315 is an executable program component facilitating the operation of the DWE controller. The operating system may facilitate access of I/O, network interfaces, peripheral devices, storage devices, and/or the like. The operating system may be a highly fault tolerant, scalable, and secure system such as: Apple Macintosh OS X (Server); AT&T Plan 9; Be OS; Unix and Unix-like system distributions (such as AT&T's UNIX; Berkeley Software Distribution (BSD) variations such as FreeBSD, NetBSD, OpenBSD, and/or the like; Linux distributions such as Red Hat, Ubuntu, and/or the like); and/or the like operating systems. However, more limited and/or less secure operating systems also may be employed such as Apple Macintosh OS, IBM OS/2, Microsoft DOS, Microsoft Windows 2000/2003/3.1/95/98/CE/Millennium/NT/Vista/XP (Server), Palm OS, and/or the like. In addition, mobile operating systems such as Apple's iOS, Google's Android, Hewlett Packard's WebOS, Microsoft's Windows Mobile, and/or the like may be employed. Any of these operating systems may be embedded within the hardware of the DWE controller, and/or stored/loaded into memory/storage. An operating system may communicate to and/or with other components in a component collection, including itself, and/or the like. Most frequently, the operating system communicates with other program components, user interfaces, and/or the like. For example, the operating system may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses. The operating system, once executed by the CPU, may enable the interaction with communications networks, data, I/O, peripheral devices, program components, memory, user input devices, and/or the like. The operating system may provide communications protocols that allow the DWE controller to communicate with other entities through a communications network 1313. Various communication protocols may be used by the DWE controller as a subcarrier transport mechanism for interaction, such as, but not limited to: multicast, TCP/IP, UDP, unicast, and/or the like.
  • Information Server
  • An information server component 1316 is a stored program component that is executed by a CPU. The information server may be an Internet information server such as, but not limited to Apache Software Foundation's Apache, Microsoft's Internet Information Server, and/or the like. The information server may allow for the execution of program components through facilities such as Active Server Page (ASP), ActiveX, (ANSI) (Objective-) C (++), C# and/or .NET, Common Gateway Interface (CGI) scripts, dynamic (D) hypertext markup language (HTML), FLASH, Java, JavaScript, Practical Extraction Report Language (PERL), Hypertext Pre-Processor (PHP), pipes, Python, wireless application protocol (WAP), WebObjects, and/or the like. The information server may support secure communications protocols such as, but not limited to, File Transfer Protocol (FTP); HyperText Transfer Protocol (HTTP); Secure Hypertext Transfer Protocol (HTTPS), Secure Socket Layer (SSL), messaging protocols (e.g., America Online (AOL) Instant Messenger (AIM), Apple's iMessage, Application Exchange (APEX), ICQ, Internet Relay Chat (IRC), Microsoft Network (MSN) Messenger Service, Presence and Instant Messaging Protocol (PRIM), Internet Engineering Task Force's (IETF's) Session Initiation Protocol (SIP), SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE), open XML-based Extensible Messaging and Presence Protocol (XMPP) (i.e., Jabber or Open Mobile Alliance's (OMA's) Instant Messaging and Presence Service (IMPS)), Yahoo! Instant Messenger Service, and/or the like. The information server provides results in the form of Web pages to Web browsers, and allows for the manipulated generation of the Web pages through interaction with other program components. After a Domain Name System (DNS) resolution portion of an HTTP request is resolved to a particular information server, the information server resolves requests for information at specified locations on the DWE controller based on the remainder of the HTTP request. For example, a request such as http://123.124.125.126/myInformation.html might have the IP portion of the request “123.124.125.126” resolved by a DNS server to an information server at that IP address; that information server might in turn further parse the http request for the “/myInformation.html” portion of the request and resolve it to a location in memory containing the information “myInformation.html.” Additionally, other information serving protocols may be employed across various ports, e.g., FTP communications across port 21, and/or the like. An information server may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the information server communicates with the DWE database 1319, operating systems, other program components, user interfaces, Web browsers, and/or the like.
  • Access to the DWE database may be achieved through a number of database bridge mechanisms such as through scripting languages as enumerated below (e.g., CGI) and through inter-application communication channels as enumerated below (e.g., CORBA, WebObjects, etc.). Any data requests through a Web browser are parsed through the bridge mechanism into appropriate grammars as required by the DWE. In one embodiment, the information server would provide a Web form accessible by a Web browser. Entries made into supplied fields in the Web form are tagged as having been entered into the particular fields, and parsed as such. The entered terms are then passed along with the field tags, which act to instruct the parser to generate queries directed to appropriate tables and/or fields. In one embodiment, the parser may generate queries in standard SQL by instantiating a search string with the proper join/select commands based on the tagged text entries, wherein the resulting command is provided over the bridge mechanism to the DWE as a query. Upon generating query results from the query, the results are passed over the bridge mechanism, and may be parsed for formatting and generation of a new results Web page by the bridge mechanism. Such a new results Web page is then provided to the information server, which may supply it to the requesting Web browser.
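  • For illustration purposes only, a non-limiting example listing, written substantially in the form of PHP/SQL commands, showing how tagged Web form entries might be turned into a query and provided over the bridge mechanism, is provided below; the form field names, table name, and database credentials are hypothetical placeholders reused from the examples elsewhere in this disclosure:
  • <?PHP
    $link = mysql_connect("201.408.185.132", $DBserver, $password); // access database server (placeholder address)
    mysql_select_db("CLIENT_DB", $link); // select database to query
    // hypothetical tagged form entries received from the Web form supplied by the information server
    $fields = array(
      'first_name' => mysql_real_escape_string($_POST['first_name'], $link),
      'last_name'  => mysql_real_escape_string($_POST['last_name'], $link)
    );
    // instantiate a search string with select/where clauses driven by the field tags
    $conditions = array();
    foreach ($fields as $tag => $value) {
      $conditions[] = "$tag = '$value'";
    }
    $query = "SELECT user_ID, device_ID_list FROM Users WHERE " . implode(" AND ", $conditions);
    // provide the resulting command over the bridge mechanism to the DWE database as a query
    $result = mysql_query($query, $link);
    ?>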
  • Also, an information server may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • User Interface
  • Computer interfaces in some respects are similar to automobile operation interfaces. Automobile operation interface elements such as steering wheels, gearshifts, and speedometers facilitate the access, operation, and display of automobile resources and status. Computer interaction interface elements such as check boxes, cursors, menus, scrollers, and windows (collectively and commonly referred to as widgets) similarly facilitate the access, capabilities, operation, and display of data and computer hardware and operating system resources and status. Operation interfaces are commonly called user interfaces. Graphical user interfaces (GUIs) such as the Apple Macintosh Operating System's Aqua and iOS's Cocoa Touch, IBM's OS/2, Google's Android Mobile UI, Microsoft's Windows 2000/2003/3.1/95/98/CE/Millennium/Mobile/NT/XP/Vista/7/8 (i.e., Aero, Metro), Unix's X-Windows (e.g., which may include additional Unix graphic interface libraries and layers such as K Desktop Environment (KDE), mythTV and GNU Network Object Model Environment (GNOME)), web interface libraries (e.g., ActiveX, AJAX, (D)HTML, FLASH, Java, JavaScript, etc.), and interface libraries such as, but not limited to, Dojo, jQuery(UI), MooTools, Prototype, script.aculo.us, SWFObject, and Yahoo! User Interface, any of which may be used, provide a baseline and means of accessing and displaying information graphically to users.
  • A user interface component 1317 is a stored program component that is executed by a CPU. The user interface may be a graphic user interface as provided by, with, and/or atop operating systems and/or operating environments such as already discussed. The user interface may allow for the display, execution, interaction, manipulation, and/or operation of program components and/or system facilities through textual and/or graphical facilities. The user interface provides a facility through which users may affect, interact, and/or operate a computer system. A user interface may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the user interface communicates with operating systems, other program components, and/or the like. The user interface may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • Web Browser
  • A Web browser component 1318 is a stored program component that is executed by a CPU. The Web browser may be a hypertext viewing application such as Google's (Mobile) Chrome, Microsoft Internet Explorer, Netscape Navigator, Apple's (Mobile) Safari, embedded web browser objects such as through Apple's Cocoa (Touch) object class, and/or the like. Secure Web browsing may be supplied with 128 bit (or greater) encryption by way of HTTPS, SSL, and/or the like. Web browsers allow for the execution of program components through facilities such as ActiveX, AJAX, (D)HTML, FLASH, Java, JavaScript, web browser plug-in APIs (e.g., Chrome, FireFox, Internet Explorer, Safari Plug-in, and/or the like APIs), and/or the like. Web browsers and like information access tools may be integrated into PDAs, cellular telephones, smartphones, and/or other mobile devices. A Web browser may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the Web browser communicates with information servers, operating systems, integrated program components (e.g., plug-ins), and/or the like; e.g., it may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses. Also, in place of a Web browser and information server, a combined application may be developed to perform similar operations of both. The combined application would similarly effect the obtaining and the provision of information to users, user agents, and/or the like from the DWE equipped nodes. The combined application may be nugatory on systems employing standard Web browsers.
  • Mail Server
  • A mail server component 1321 is a stored program component that is executed by a CPU 1303. The mail server may be an Internet mail server such as, but not limited to Apple's Mail Server (3), dovecot, sendmail, Microsoft Exchange, and/or the like. The mail server may allow for the execution of program components through facilities such as ASP, ActiveX, (ANSI) (Objective-) C (++), C# and/or .NET, CGI scripts, Java, JavaScript, PERL, PHP, pipes, Python, WebObjects, and/or the like. The mail server may support communications protocols such as, but not limited to: Internet message access protocol (IMAP), Messaging Application Programming Interface (MAPI)/Microsoft Exchange, post office protocol (POP3), simple mail transfer protocol (SMTP), and/or the like. The mail server can route, forward, and process incoming and outgoing mail messages that have been sent, relayed, and/or are otherwise traversing through and/or to the DWE.
  • Access to the DWE mail may be achieved through a number of APIs offered by the individual Web server components and/or the operating system.
  • Also, a mail server may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, information, and/or responses.
  • Mail Client
  • A mail client component 1322 is a stored program component that is executed by a CPU 1303. The mail client may be a mail viewing application such as Apple (Mobile) Mail, Microsoft Entourage, Microsoft Outlook, Microsoft Outlook Express, Mozilla Thunderbird, and/or the like. Mail clients may support a number of transfer protocols, such as: IMAP, Microsoft Exchange, POP3, SMTP, and/or the like. A mail client may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the mail client communicates with mail servers, operating systems, other mail clients, and/or the like; e.g., it may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, information, and/or responses. Generally, the mail client provides a facility to compose and transmit electronic mail messages.
  • Cryptographic Server
  • A cryptographic server component 1320 is a stored program component that is executed by a CPU 1303, cryptographic processor 1326, cryptographic processor interface 1327, cryptographic processor device 1328, and/or the like. Cryptographic processor interfaces will allow for expedition of encryption and/or decryption requests by the cryptographic component; however, the cryptographic component, alternatively, may run on a CPU. The cryptographic component allows for the encryption and/or decryption of provided data. The cryptographic component allows for both symmetric and asymmetric (e.g., Pretty Good Protection (PGP)) encryption and/or decryption. The cryptographic component may employ cryptographic techniques such as, but not limited to: digital certificates (e.g., X.509 authentication framework), digital signatures, dual signatures, enveloping, password access protection, public key management, and/or the like. The cryptographic component will facilitate numerous (encryption and/or decryption) security protocols such as, but not limited to: checksum, Data Encryption Standard (DES), Elliptical Curve Encryption (ECC), International Data Encryption Algorithm (IDEA), Message Digest 5 (MD5, which is a one way hash operation), passwords, Rivest Cipher (RC5), Rijndael, RSA (which is an Internet encryption and authentication system that uses an algorithm developed in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman), Secure Hash Algorithm (SHA), Secure Socket Layer (SSL), Secure Hypertext Transfer Protocol (HTTPS), and/or the like. Employing such encryption security protocols, the DWE may encrypt all incoming and/or outgoing communications and may serve as a node within a virtual private network (VPN) with a wider communications network. The cryptographic component facilitates the process of “security authorization” whereby access to a resource is inhibited by a security protocol wherein the cryptographic component effects authorized access to the secured resource. In addition, the cryptographic component may provide unique identifiers of content, e.g., employing an MD5 hash to obtain a unique signature for a digital audio file. A cryptographic component may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. The cryptographic component supports encryption schemes allowing for the secure transmission of information across a communications network to enable the DWE component to engage in secure transactions if so desired. The cryptographic component facilitates the secure accessing of resources on the DWE and facilitates the access of secured resources on remote systems; i.e., it may act as a client and/or server of secured resources. Most frequently, the cryptographic component communicates with information servers, operating systems, other program components, and/or the like. The cryptographic component may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
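  • As a purely illustrative sketch written in the form of PHP commands, a unique content identifier of the kind described above may be obtained as follows; the file path shown is a hypothetical placeholder:
  • <?PHP
    // compute an MD5 digest of a digital audio file to serve as a unique content signature
    $content_id = md5_file('/path/to/audio_file.mp3'); // hypothetical path
    // the resulting digest may then be stored alongside the corresponding content record, e.g., in the DWE database
    ?>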
  • The DWE Database
  • The DWE database component 1319 may be embodied in a database and its stored data. The database is a stored program component, which is executed by the CPU; the stored program component portion configuring the CPU to process the stored data. The database may be any of a number of fault tolerant, relational, scalable, secure databases, such as DB2, MySQL, Oracle, Sybase, and/or the like. Relational databases are an extension of a flat file. Relational databases consist of a series of related tables. The tables are interconnected via a key field. Use of the key field allows the combination of the tables by indexing against the key field; i.e., the key fields act as dimensional pivot points for combining information from various tables. Relationships generally identify links maintained between tables by matching primary keys. Primary keys represent fields that uniquely identify the rows of a table in a relational database. More precisely, they uniquely identify rows of a table on the “one” side of a one-to-many relationship.
  • Alternatively, the DWE database may be implemented using various standard data-structures, such as an array, hash, (linked) list, struct, structured text file (e.g., XML), table, and/or the like. Such data-structures may be stored in memory and/or in (structured) files. In another alternative, an object-oriented database may be used, such as Frontier, ObjectStore, Poet, Zope, and/or the like. Object databases can include a number of object collections that are grouped and/or linked together by common attributes; they may be related to other object collections by some common attributes. Object-oriented databases perform similarly to relational databases with the exception that objects are not just pieces of data but may have other types of capabilities encapsulated within a given object. If the DWE database is implemented as a data-structure, the use of the DWE database 1319 may be integrated into another component such as the DWE component 1335. Also, the database may be implemented as a mix of data structures, objects, and relational structures. Databases may be consolidated and/or distributed in countless variations through standard data processing techniques. Portions of databases, e.g., tables, may be exported and/or imported and thus decentralized and/or integrated.
  • In one embodiment, the database component 1319 includes several tables 1319 a-l. A Users table 1319 a may include fields such as, but not limited to: user_ID, first_name, last_name, middle_name, suffix, prefix, device_ID_list, device_name_list, device_type_list, hardware_configuration_list, software_apps_list, device_MAC_list, device_preferences_list, and/or the like. The Users table may support and/or track multiple entity accounts on a DWE. A Clients table 1319 b may include fields such as, but not limited to: device_ID_list, device_name_list, device_type_list, hardware_configuration_list, software_apps_list, device_IP_list, device_MAC_list, device_preferences_list, and/or the like. An Objects table 1319 c may include fields such as, but not limited to: size_pixels, resolution, scaling, x_position, y_position, height, width, shadow_flag, 3D_effect_flag, alpha, brightness, contrast, saturation, gamma, transparency, overlap, boundary_margin, rotation_angle, revolution_angle, and/or the like. An Apps table 1319 d may include fields such as, but not limited to: app_name, app_id, app_version, app_software_requirements_list, app_hardware_requirements_list, and/or the like. A Gestures table 1319 e may include fields such as, but not limited to: gesture_name, gesture_type, assoc_code_module, num_users, num_inputs, velocity_threshold_list, acceleration_threshold_list, pressure_threshold_list, and/or the like. A Physics Models table 1319 f may include fields such as, but not limited to: acceleration, velocity, direction_x, direction_y, orientation_theta, orientation_phi, object_mass, friction_coefficient_x, friction_coefficient_y, friction_coefficient_theta, friction_coefficient_phi, object_elasticity, restitution_percent, terminal_velocity, center_of_mass, moment_inertia, relativistic_flag, newtonian_flag, collision_type, dissipation_factor, and/or the like. A Viewports table 1319 g may include fields such as, but not limited to: user_id, client_id, viewport_shape, viewport_x, viewport_y, viewport_size_list, and/or the like. A Whiteboards table 1319 h may include fields such as, but not limited to: whiteboard_id, whiteboard_name, whiteboard_team_list, whiteboard_directory, and/or the like. An Object Contexts table 1319 i may include fields such as, but not limited to: object_id, object_type, system_settings_flag, object_menu_XML, and/or the like. A System Contexts table 1319 j may include fields such as, but not limited to: object_type, system_settings_flag, system_menu_XML, and/or the like. A Remote Window Contents table 1319 k may include fields such as, but not limited to: window_id, window_link, window_refresh_trigger, and/or the like. A Market Data table 1319 l may include fields such as, but not limited to: market_data_feed_ID, asset_ID, asset_symbol, asset_name, spot_price, bid_price, ask_price, and/or the like; in one embodiment, the market data table is populated through a market data feed (e.g., Bloomberg's PhatPipe, Dun & Bradstreet, Reuter's Tib, Triarch, etc.), for example, through Microsoft's Active Template Library and Dealing Object Technology's real-time toolkit Rtt.Multi.
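  • Purely by way of illustration, a non-limiting example listing, written substantially in the form of PHP/SQL commands, showing how one such table (here, the Gestures table 1319 e) might be created, is provided below; the column types, the added gesture_id primary key, the database name, and the server address are assumptions made for this sketch:
  • <?PHP
    $link = mysql_connect("201.408.185.132", $DBserver, $password); // access database server (placeholder address)
    mysql_select_db("DWE_DB", $link); // select database (hypothetical name)
    // create the Gestures table using the fields enumerated above; column types are illustrative only
    mysql_query("CREATE TABLE IF NOT EXISTS Gestures (
      gesture_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      gesture_name VARCHAR(255),
      gesture_type VARCHAR(255),
      assoc_code_module VARCHAR(255),
      num_users INT,
      num_inputs INT,
      velocity_threshold_list TEXT,
      acceleration_threshold_list TEXT,
      pressure_threshold_list TEXT
    )", $link);
    mysql_close($link); // close connection to database
    ?>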
  • In one embodiment, the DWE database may interact with other database systems. For example, employing a distributed database system, queries and data access by the search DWE component may treat the combination of the DWE database and an integrated data security layer database as a single database entity.
  • In one embodiment, user programs may contain various user interface primitives, which may serve to update the DWE. Also, various accounts may require custom database tables depending upon the environments and the types of clients the DWE may need to serve. It should be noted that any unique fields may be designated as a key field throughout. In an alternative embodiment, these tables have been decentralized into their own databases and their respective database controllers (i.e., individual database controllers for each of the above tables). Employing standard data processing techniques, one may further distribute the databases over several computer systemizations and/or storage devices. Similarly, configurations of the decentralized database controllers may be varied by consolidating and/or distributing the various database components 1319 a-l. The DWE may be configured to keep track of various settings, inputs, and parameters via database controllers.
  • The DWE database may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the DWE database communicates with the DWE component, other program components, and/or the like. The database may contain, retain, and provide information regarding other nodes and data.
  • The DWEs
  • The DWE component 1335 is a stored program component that is executed by a CPU. In one embodiment, the DWE component incorporates any and/or all combinations of the aspects of the DWE discussed in the previous figures. As such, the DWE affects accessing, obtaining and the provision of information, services, transactions, and/or the like across various communications networks. The features and embodiments of the DWE discussed herein increase network efficiency by reducing data transfer requirements through the use of more efficient data structures and mechanisms for their transfer and storage. As a consequence, more data may be transferred in less time, and latencies with regard to transactions are also reduced. In many cases, such reduction in storage, transfer time, bandwidth requirements, latencies, etc., will reduce the capacity and structural infrastructure requirements to support the DWE's features and facilities, and in many cases reduce the costs, energy consumption/requirements, and extend the life of the DWE's underlying infrastructure; this has the added benefit of making the DWE more reliable. Similarly, many of the features and mechanisms are designed to be easier for users to use and access, thereby broadening the audience that may enjoy/employ and exploit the feature sets of the DWE; such ease of use also helps to increase the reliability of the DWE. In addition, the feature sets include heightened security as noted via the Cryptographic components 1320, 1326, 1328 and throughout, making access to the features and data more reliable and secure.
  • The DWE component may transform user multi-element touchscreen gestures, via DWE components, into updated digital collaboration whiteboard objects, and/or the like, in the course of use of the DWE. In one embodiment, the DWE component 1335 takes inputs (e.g., collaborate request input 211, authentication response 215, tile objects data 220, whiteboard input 611, user whiteboard session object 616, user instruction lookup response 619, tile objects data 622, affected clients data 627, user input raw data 1001, object-specified context instructions 1014, system context interpretation instructions 1013, and/or the like), and transforms the inputs via various components (e.g., WCSI 1241, CVS 1242, VCG 1243, UCW 1244, UGI 1245, UWE 1246, and/or the like) into outputs (e.g., collaborator acknowledgment 216, user whiteboard session object 222, whiteboard session details 224, updated tile objects data 630, updated user whiteboard session details 631-632 a-c, user gesture identifier 1016, and/or the like).
  • The DWE component enabling access of information between nodes may be developed by employing standard development tools and languages such as, but not limited to: Apache components, Assembly, ActiveX, binary executables, (ANSI) (Objective-) C (++), C# and/or .NET, database adapters, CGI scripts, Java, JavaScript, mapping tools, procedural and object oriented development tools, PERL, PHP, Python, shell scripts, SQL commands, web application server extensions, web development environments and libraries (e.g., Microsoft's ActiveX; Adobe AIR, FLEX & FLASH; AJAX; (D)HTML; Dojo, Java; JavaScript; jQuery(UI); MooTools; Prototype; script.aculo.us; Simple Object Access Protocol (SOAP); SWFObject; Yahoo! User Interface; and/or the like), WebObjects, and/or the like. In one embodiment, the DWE server employs a cryptographic server to encrypt and decrypt communications. The DWE component may communicate to and/or with other components in a component collection, including itself, and/or facilities of the like. Most frequently, the DWE component communicates with the DWE database, operating systems, other program components, and/or the like. The DWE may contain, communicate, generate, obtain, and/or provide program component, system, user, and/or data communications, requests, and/or responses.
  • Distributed DWEs
  • The structure and/or operation of any of the DWE node controller components may be combined, consolidated, and/or distributed in any number of ways to facilitate development and/or deployment. Similarly, the component collection may be combined in any number of ways to facilitate deployment and/or development. To accomplish this, one may integrate the components into a common code base or in a facility that can dynamically load the components on demand in an integrated fashion.
  • The component collection may be consolidated and/or distributed in countless variations through standard data processing and/or development techniques. Multiple instances of any one of the program components in the program component collection may be instantiated on a single node, and/or across numerous nodes to improve performance through load-balancing and/or data-processing techniques. Furthermore, single instances may also be distributed across multiple controllers and/or storage devices; e.g., databases. All program component instances and controllers working in concert may do so through standard data processing communication techniques.
  • The configuration of the DWE controller will depend on the context of system deployment. Factors such as, but not limited to, the budget, capacity, location, and/or use of the underlying hardware resources may affect deployment requirements and configuration. Regardless of if the configuration results in more consolidated and/or integrated program components, results in a more distributed series of program components, and/or results in some combination between a consolidated and distributed configuration, data may be communicated, obtained, and/or provided. Instances of components consolidated into a common code base from the program component collection may communicate, obtain, and/or provide data. This may be accomplished through intra-application data processing communication techniques such as, but not limited to: data referencing (e.g., pointers), internal messaging, object instance variable communication, shared memory space, variable passing, and/or the like.
  • If component collection components are discrete, separate, and/or external to one another, then communicating, obtaining, and/or providing data with and/or to other components may be accomplished through inter-application data processing communication techniques such as, but not limited to: Application Program Interface (API) information passage; (distributed) Component Object Model ((D)COM), (Distributed) Object Linking and Embedding ((D)OLE), and/or the like; Common Object Request Broker Architecture (CORBA); Jini local and remote application program interfaces; JavaScript Object Notation (JSON); Remote Method Invocation (RMI); SOAP; process pipes; shared files; and/or the like. Messages sent between discrete components for inter-application communication, or within memory spaces of a singular component for intra-application communication, may be facilitated through the creation and parsing of a grammar. A grammar may be developed by using development tools such as lex, yacc, XML, and/or the like, which allow for grammar generation and parsing capabilities, which in turn may form the basis of communication messages within and between components.
  • For example, a grammar may be arranged to recognize the tokens of an HTTP post command, e.g.:
      • w3c -post http:// . . . Value1
        where Value1 is discerned as being a parameter because “http://” is part of the grammar syntax, and what follows is considered part of the post value. Similarly, with such a grammar, a variable “Value1” may be inserted into an “http://” post command and then sent. The grammar syntax itself may be presented as structured data that is interpreted and/or otherwise used to generate the parsing mechanism (e.g., a syntax description text file as processed by lex, yacc, etc.). Also, once the parsing mechanism is generated and/or instantiated, it itself may process and/or parse structured data such as, but not limited to: character (e.g., tab) delineated text, HTML, structured text streams, XML, and/or the like structured data. In another embodiment, inter-application data processing protocols themselves may have integrated and/or readily available parsers (e.g., JSON, SOAP, and/or like parsers) that may be employed to parse (e.g., communications) data. Further, the parsing grammar may be used beyond message parsing, but may also be used to parse: databases, data collections, data stores, structured data, and/or the like. Again, the desired configuration will depend upon the context, environment, and requirements of system deployment.
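  • For illustration only, a minimal non-limiting sketch, written in the form of PHP commands, showing how such a grammar might be applied to discern the Value1 parameter from a post command of the form shown above, is provided below; the example command string (including its placeholder URL) and the whitespace tokenization are assumptions made for this sketch rather than a lex/yacc-generated parser:
  • <?PHP
    // hypothetical post command; the URL is a placeholder standing in for the elided address above
    $command = 'w3c -post http://www.example.com/form Value1';
    // split the command on whitespace into candidate tokens
    $tokens = preg_split('/\s+/', trim($command));
    $post_value = null;
    foreach ($tokens as $i => $token) {
      // "http://" is part of the grammar syntax; what follows it is considered part of the post value
      if (strpos($token, 'http://') === 0 && isset($tokens[$i + 1])) {
        $post_value = $tokens[$i + 1]; // discerned parameter, here "Value1"
        break;
      }
    }
    ?>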
  • For example, in some implementations, the DWE controller may be executing a PHP script implementing a Secure Sockets Layer (“SSL”) socket server via the information server, which listens to incoming communications on a server port to which a client may send data, e.g., data encoded in JSON format. Upon identifying an incoming communication, the PHP script may read the incoming message from the client device, parse the received JSON-encoded text data to extract information from the JSON-encoded text data into PHP script variables, and store the data (e.g., client identifying information, etc.) and/or extracted information in a relational database accessible using the Structured Query Language (“SQL”). An exemplary listing, written substantially in the form of PHP/SQL commands, to accept JSON-encoded input data via SSL, parse the data to extract variables, and store it in a database, is provided below:
  • <?PHP
    header('Content-Type: text/plain');
    // set IP address and port to listen to for incoming data
    $address = '192.168.0.100';
    $port = 255;
    // create a server-side SSL socket, listen for/accept incoming communication
    $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    socket_bind($sock, $address, $port) or die('Could not bind to address');
    socket_listen($sock);
    $client = socket_accept($sock);
    // read input data from client device in 1024 byte blocks until end of message
    $data = "";
    do {
      $input = socket_read($client, 1024);
      $data .= $input;
    } while ($input != "");
    // parse data to extract variables
    $obj = json_decode($data, true);
    // store input data in a database
    mysql_connect("201.408.185.132", $DBuser, $password); // access database server
    mysql_select_db("CLIENT_DB"); // select database to append
    mysql_query("INSERT INTO UserTable (transmission)
      VALUES ('" . mysql_real_escape_string($data) . "')"); // add data to UserTable table in a CLIENT database
    mysql_close(); // close connection to database
    ?>
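  • Correspondingly, in some implementations, a client device may encode its data in JSON format and transmit it to the listening server over an SSL connection. An illustrative client-side listing, also written substantially in the form of PHP commands, is provided below; the server address, port, and message fields are hypothetical placeholders:
  • <?PHP
    // client-side sketch: encode data as JSON and send it over an SSL socket
    $message = json_encode(array(
      'client_id' => 'client-001',               // hypothetical client identifying information
      'timestamp' => time(),
      'payload'   => array('action' => 'checkin')
    ));
    // open an SSL stream to the address and port the server listens on
    $fp = stream_socket_client('ssl://192.168.0.100:255', $errno, $errstr, 30);
    if ($fp) {
      fwrite($fp, $message);  // transmit the JSON-encoded text data
      fclose($fp);            // closing the stream signals end of message to the server
    } else {
      echo "Connection failed: $errstr ($errno)\n";
    }
    ?>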
  • Also, the following provide example embodiments of SOAP and other parser implementations, all of which are expressly incorporated by reference herein:
  • [1]http://www.xav.com/perl/site/lib/SOAP/Parser.html
    [2]http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.IBMDI.doc/referenceguide295.htm
    [3]http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.IBMDI.doc/referenceguide259.htm
  • FIGS. 14-65 are described with respect to an embodiment referred to herein as “Thoughtstream”. There are two modes of operation in this embodiment: Record and Retrieve. If the system is not retrieving then it is recording, even if it is recording nothing. All information, media and ideas are stored in perpetuity.
  • FIG. 14 is a drawing of a calculator. Normally, a calculator is loaded into your computer's memory, you interact with it by clicking buttons, and then you might copy and paste the results into a document. When you are done, you close the application, and the application is removed from memory. In the present embodiment, however, the calculator is never unloaded from memory. The functions, the button presses, the calculations, are all saved with a time index. Even if the calculator is “closed”, or removed from the system, the application and all of its history persist in the cloud.
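  • A minimal sketch of such time-indexed persistence, written substantially in the form of PHP commands, is provided below; the class name PersistentCalculator and its methods are hypothetical, and in a deployed embodiment the history would be stored in the cloud rather than in local memory:
  • <?PHP
    // minimal sketch of a calculator whose button presses are never discarded:
    // every interaction is recorded with a time index, and any past state can
    // be reconstructed by replaying the history up to that moment
    class PersistentCalculator {
      private $history = array();
      public function press($op, $operand) {
        // record the interaction with a time index instead of mutating transient state
        $this->history[] = array('t' => microtime(true), 'op' => $op, 'operand' => $operand);
      }
      public function replay_until($timestamp) {
        // reconstruct the displayed value as it existed at $timestamp
        $value = 0;
        foreach ($this->history as $event) {
          if ($event['t'] > $timestamp) break;
          if ($event['op'] == 'add') $value += $event['operand'];
          elseif ($event['op'] == 'mul') $value *= $event['operand'];
        }
        return $value;
      }
    }
    ?>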
  • Imagine if you never had to worry about saving files from your apps. This way of thinking was not possible until very recently. Hard drive space is affordable now, and has been for a few years, but most of the operating system and user interface paradigms were codified twenty or thirty years ago. ThoughtStream is a cloud-based application platform. The SuperWall is a ThoughtStream Application viewer, and is also a special, turbocharged ThoughtStream Application. ThoughtStream Applications are inherently multi-user, persistent, and device-independent.
  • Let's look at the old metaphor for a moment. Historically, the arrangement of information on a computer desktop was never considered meaningful. That's because the desktop (literal or figurative) isn't a place for storage. It's a workplace. It is where you tinker and toil. You then save the result of your labor into a filing cabinet: That's why they are called “files”.
  • The web is about persistent, shared information. Google Docs revolutionized office productivity by giving people the ability to externalize information in a way that had never been done before. Google Docs doesn't store the history of a document, but it does store the STATE of a document. This is a perfect expression of the spirit and promise of the web, applied to the idea of applications instead of “web pages”.
  • ThoughtStream and the SuperWall extend this metaphor not slightly, but extremely, to more than just a text-editor or a drawing tool. They extend this metaphor into applications, the desktop, and the wall, not only with information itself, as discrete elements, but with the arrangement of information: the aggregation of ideas from many people, across space and time. It is much more than an infinite whiteboard. It is a new way of thinking. It is the promise of cloud computing.
  • So, what does this mean for the Super Wall? Well, for starters, we need to stop thinking of the Super Wall as a drawing tool. It is an aggregation tool. It's a place for ideas to live and breathe. It is a platform for the consideration, presentation, and arrangement of persistent, shared information, across space and time. It just so happens that you can write on it. It's not just a new desktop. It's a SuperWall.
  • The SuperWall has more in common with an operating system than a drawing tool. It's more like Windows than Word. It's more like a browser than a website.
  • UI Design Principles
  • Be consistent, assume that people do not know what to do, and ensure that almost everything can be accomplished with single taps or swipes. The challenge with ThoughtStream Applications is determining what is content (saved) and what is user interface (not saved).
  • FIG. 15 illustrates a traditional Desktop Application User Interface Paradigm: Windows are containers for application content. The operating system provides methods to manipulate, close, open, resize, etc., the window containers themselves. This technique works great for Mouse-based interfaces on desktops or laptops, but not so well for large-format touch-based user interfaces, particularly where the content window can be virtually any size. For ergonomic (and aesthetic) reasons, we don't always want to see the menu or title bar. On a large display where you may have hundreds or thousands of objects at once, it will get very busy and confusing.
  • As illustrated in FIG. 15, for the most part, we don't want UI elements to go above a person's eye-line. The average height of an American adult's eyes is around five feet. Conversely, we don't want UI objects to go below a certain point on the wall either, or else they will risk going unnoticed or become difficult to access. We don't want to make people bend over.
  • As illustrated in FIG. 17, we can solve this problem by decoupling the auxiliary menu options from the application contents.
  • What we end up with are Objects and Toolbars (see FIG. 18). (There is a third element called a Region, which will be explained later.) These basic elements can be used together in a number of powerful ways which solve most of the requirements posed by the ThoughtStream operating environment.
  • As illustrated in FIG. 19, “objects” have no boundaries whatsoever. They can be as large or small as the user wishes, and they can be moved freely throughout the space. “Regions” also have no boundaries, but they cannot overlap one another. In any given vertical line, there can only be one Region. This is because humans are vertical-standing creatures. “Toolbars” are strictly limited to the vertical space we call the “Ergonomic Zone”. They are in front of Regions and Objects and cannot be occluded by other toolbars. Now, no matter what the configuration of Objects, we have ergonomically accessible toolbars.
  • Referring to FIG. 20, to support ADA requirements or children, we can temporarily lower the ergonomic boundaries if a user touches somewhere BELOW the ergonomic zone. Since the UI is dynamic and flexible, this will not create a significant problem.
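  • One possible way to enforce these boundaries, sketched substantially in the form of PHP commands below, is to clamp every toolbar position into the current ergonomic zone and to lower the zone temporarily when a touch arrives below it; the boundary values and function names are hypothetical:
  • <?PHP
    // ergonomic zone expressed in inches from the floor (values are illustrative)
    $zone = array('top' => 60, 'bottom' => 36); // roughly eye level down to waist level
    function clamp_toolbar_y($y, $zone) {
      // keep the toolbar inside the ergonomic zone regardless of object position
      return max($zone['bottom'], min($zone['top'], $y));
    }
    function on_touch($touch_y, &$zone) {
      if ($touch_y < $zone['bottom']) {
        // touch below the zone (e.g., a child or a seated user):
        // temporarily lower the ergonomic boundaries by the same amount
        $offset = $zone['bottom'] - $touch_y;
        $zone['top']    -= $offset;
        $zone['bottom'] -= $offset;
      }
    }
    ?>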
  • As illustrated in FIG. 21, Rule 1 is that in the default state, single-point swipes cause movement. Swiping the canvas will pan (if there are no Active Regions!).
  • Toolbars
  • A canvas toolbar is what appears when a user touches the canvas: any space that is not occupied by an Object or Region. See FIG. 22.
  • As illustrated in FIG. 23, touching the Cloud button opens up the Cloud Pane. The Canvas Toolbar stays in the same position, but it scales up and transforms into the Cloud Pane. The upper limit of the pane must remain within the upper Ergonomic Boundary. The Cloud Pane allows users to select apps, documents, images, or other objects from a hierarchical display module. This set of objects can also be administered through an on-line component, perhaps Dropbox, SugarSync, or another solution. Touching an item once will cause it to appear on the canvas as a new object.
  • Referring to FIG. 24, Toolbars make up the majority of the user interface for ThoughtStream. They provide a way to get to virtually all functionality within the system, and they are entirely dynamic entities. This lets us create a dynamic user interface that is not cluttered with hundreds of buttons and widgets. We can create a system where only the necessary elements are visible at any given time. Coupled with the Region, virtually any task can be accomplished. Single-user devices only need one toolbar, which does not necessarily leave the screen. In fact, it could be designed in a way similar to the Mac OS, wherein the toolbar at the top of the screen is always there, but it changes depending on which window has been given focus. The paradigm is similar here as well.
  • Because we have decoupled our Toolbars from the rectangular objects they are associated with, we can position the toolbars independently of the Objects they belong to. To wit: we can make sure that Toolbars are within the ergonomic zone. Even if an object is so large that the edges cannot be seen, the toolbars for that object (if it is active / has focus) can float on top and be accessible by the users. See FIG. 25.
  • As illustrated in FIG. 26, Toolbars “travel” across the SuperSpace. Toolbars are attracted to activity in a linear way, i.e., the amount of force or velocity applied to a toolbar is equal regardless of distance to activity. However, there is also a repellent “inverse magnetism” force at work, which is non-linear: specifically, −1/d². This will cause the toolbar to approach activity, but as the user gets closer, the toolbar will move away, thereby not impeding writing or drawing.
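  • A minimal sketch of this “travel” behavior, written substantially in the form of PHP commands, is provided below; the constants and function name are hypothetical. The attraction term has the same magnitude at any distance, while the repulsion term follows −1/d², so the toolbar approaches activity from afar but backs away as the hand gets close:
  • <?PHP
    // net force on a toolbar along one axis, given the position of the nearest activity
    function toolbar_force($toolbar_x, $activity_x, $k_attract = 1.0, $k_repel = 40.0) {
      $d = $activity_x - $toolbar_x;
      if ($d == 0) return 0;
      $direction = ($d > 0) ? 1 : -1;
      $distance  = abs($d);
      $attract = $k_attract * $direction;                          // linear: same magnitude at any distance
      $repel   = -$k_repel * $direction / ($distance * $distance); // non-linear "inverse magnetism": -1/d^2
      return $attract + $repel; // positive moves toward the activity, negative moves away
    }
    ?>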
  • Regions
  • A region is a combination of three concepts from the Desktop paradigm: Lasso, Personal Workspace (or Device View), and Application Focus. Referring to FIG. 28, Regions are used to do the following: Create a workspace within which to draw or manipulate information; Define an area within which one or more objects can be grouped, moved, arranged, copied, and deleted; and Create an area that is yours, and yours alone to manipulate. This helps when we think about “Undo” in a collaborative environment!
  • There are two different kinds of Regions: Object Regions and Canvas Regions. Object Regions are connected to Objects, while Canvas Regions are separated from objects. Canvas Regions are used to select multiple objects or declare a region of space within which to perform global functions, like drawing, copying, etc.
  • Object Regions have two modes: Active and Transform. (All inactive Object Regions are Transform regions.) Referring to FIG. 29, a Transform Region is designed to allow users to scale and move the Region. Canvas Regions can scale and move independently of anything else, while Regions which are linked to Objects will affect the Objects they belong to. Touching the corners of a Transform Region will allow the user to scale, and touching the middle will allow the object to be moved. Multitouch gestures like Pinch and Zoom work independently of these handles.
  • There are a number of ways to create Regions. Referring to FIGS. 30 and 31, a user can specify a region. This method requires that the user use two fingers to define a rectangular area by specifying opposing corners. After a Region has been created, the toolbar immediately appears.
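  • An illustrative sketch of deriving a Region's bounds from two opposing-corner touches, written substantially in the form of PHP commands, is provided below; the coordinate fields and function name are hypothetical:
  • <?PHP
    // build a rectangular Region from two simultaneous touches at opposing corners
    function region_from_touches($touch_a, $touch_b) {
      return array(
        'left'   => min($touch_a['x'], $touch_b['x']),
        'right'  => max($touch_a['x'], $touch_b['x']),
        'bottom' => min($touch_a['y'], $touch_b['y']),
        'top'    => max($touch_a['y'], $touch_b['y'])
      );
    }
    $region = region_from_touches(array('x' => 12, 'y' => 40), array('x' => 48, 'y' => 62));
    // the Region toolbar can now be displayed immediately for $region
    ?>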
  • Activate an Object. Referring to FIG. 32, since all objects are rectangular, they can also benefit from the Region paradigm. Simply touching any object once (not moving it) will activate it. This is virtually the same thing as Focus on desktop PCs. In Thought Stream, we have combined the idea of Focus and Lasso/Selection into one model which provides a powerful and elegant way to deal with a large, multi-user, spatial environment.
  • Begin drawing. Referring to FIG. 33, users with a stylus can simply begin drawing anywhere, at any time. Referring to FIG. 34, if a user chooses Draw from a default Toolbar, a Region will be created automatically.
  • Use the Toolbar. Referring to FIG. 35, the default Toolbar has a very tiny Region that has no size yet, because it has not been given a boundary. Touching the Region Circle will cause a new Region to be created automatically, extending up to the Ergonomic Boundaries.
  • FIGS. 36A and 36B illustrate that a single tap on a clean canvas causes local canvas options to appear.
  • FIG. 37 illustrates that if a user chooses “Draw”, this puts the local UI Context into draw mode. Local UI Contexts (or Regions) cover the Vertical Ergonomic Zone and the Horizontal Device Zone by default. In general, we want to avoid causing users to reach above their eye level and we want to avoid having them bend down.
  • Referring to FIG. 38, the user can draw freely within the UI context, but what happens when they reach an edge? As illustrated in FIG. 39, the SuperSpace can extend a Region on-the-fly to fill adjacent area when needed. Region behavior is one of the most important, and complicated, components of Superspace UI design.
  • Referring to FIG. 40, the tool palette always stays within the ergonomic zone. Also, a Region can be resized and moved just like any other object. It can also be used to cut & copy. Multiple Regions cannot overlap on the same device.
  • Travelling Regions. As illustrated in FIGS. 41, 42 and 43, drawing from one object to another causes the original object region to deactivate. A new Canvas Region is immediately created which allows the drawing to continue not only onto the new object, but also the Canvas, other objects, or even the original object. Continuing to draw off the secondary object and into the canvas area has no effect. The Canvas Region continues to adapt and follow the activity of the hand. A user can single tap any object within a Canvas Region to destroy the Canvas Region and activate the selected Object Region.
  • Closing a Region. As illustrated in FIG. 44, simply pressing the Close button will close the Toolbar, as well as the associated Region. As illustrated in FIG. 45, closing an Object Region will only deactivate the Region. The object itself will remain unchanged. Also, as illustrated in FIG. 46, touching outside of a region will close it also. This will work above and below the region, as it is assumed that this is inside someone's personal space, and it may work within a specified distance from the Region, perhaps six inches.
  • Neighbor Regions. Referring to FIGS. 47 and 48, generally speaking, we do not want multiple Regions to occupy the same space. There must be a good deal of logic written to prevent Regions from overlapping. Also, Regions which are made to “collide” with each other will cause the Region that was used least recently to deactivate. Note that this is only an issue for devices which support multiple Regions (and therefore multiple users). Single-user experiences only need one Region, which functions a little differently than the ones described in this document.
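  • A minimal sketch of this collision rule, written substantially in the form of PHP commands, is provided below; the data layout (each Region carrying a last-used timestamp and a horizontal extent) is a hypothetical simplification:
  • <?PHP
    // two Regions "collide" when their horizontal extents overlap;
    // the Region that was used least recently is the one that deactivates
    function regions_overlap($a, $b) {
      return $a['left'] < $b['right'] && $b['left'] < $a['right'];
    }
    function resolve_collision(&$a, &$b) {
      if (!regions_overlap($a, $b)) return;
      if ($a['last_used'] < $b['last_used']) {
        $a['active'] = false; // $a was used least recently
      } else {
        $b['active'] = false;
      }
    }
    ?>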
  • Region Options. Canvas Regions can be used to select and manipulate many objects at once. Some examples of arranging many objects at once can be seen in FIG. 49.
  • As illustrated in FIG. 50, Objects can be arranged or moved as a group. As illustrated in FIG. 51, Objects can be removed as a group.
  • Objects
  • All Objects on the SuperWall are ThoughtStream applications. They are not files. They are ideas. Ideas are not static. They live and breathe.
  • Standard Objects
  • The primary object type is the super image. This is an unlimited-resolution object which can be a photo, illustration, or other bitmap object that has either been uploaded or created within the ThoughtStream.
  • Different objects have different toolbars depending on what they are. For instance, the standard object is primarily for display, drawing, or annotating, so its tools are designed to support that feature. Colors, eraser tools, and other brush options are available. In Passive mode (illustrated in FIG. 52), a single touch-swipe causes movement. A multi touch-swipe (pawing) also causes movement. A single tap activates/gives focus, and Pinch and Zoom cause scaling/translation. In Active mode (illustrated in FIG. 53), a single touch-swipe causes object-specific activity. A multi touch-swipe (pawing) is object-specific. Single tapping may close the region and immediately create a new Canvas Region.
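  • An illustrative dispatch of touch gestures by object mode, written substantially in the form of PHP commands, is provided below; the mode and gesture labels are hypothetical names for the behavior described above:
  • <?PHP
    // route an incoming gesture depending on whether the object is Passive or Active
    function dispatch_gesture($object, $gesture) {
      if ($object['mode'] == 'passive') {
        switch ($gesture) {
          case 'single_swipe':
          case 'multi_swipe': return 'move_object';     // swiping moves the object
          case 'single_tap':  return 'activate_object'; // tap gives focus
          case 'pinch':       return 'scale_object';    // pinch and zoom scale/translate
        }
      } else { // active mode
        switch ($gesture) {
          case 'single_swipe':
          case 'multi_swipe': return 'object_specific_action'; // e.g., drawing or annotating
          case 'single_tap':  return 'close_region_and_create_canvas_region';
        }
      }
      return 'ignore';
    }
    ?>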
  • FIG. 54 illustrates a multipage document. FIG. 55 illustrates a video player.
  • Third Party Objects. A major feature of the ThoughtStream system is that it can be extended by third parties. The modular Object Model will have an API which allows developers to create their own objects. This could be very similar to the Widget system in Mac OS X. FIG. 56 illustrates a third party object in the passive mode, and FIG. 58 illustrates it in the active mode. FIG. 59 illustrates a dumb window in the passive mode and FIG. 60 illustrates it in the active mode.
  • All objects are—essentially, and actually—applications. Even if they appear to “just be an image”. It is actually an app in the cloud that stores state and is built on the ThoughtStream application framework. The entire crux of ThoughtStream then rests on the design of the application model. How do applications communicate with client devices? What protocol is used? What does the user interface model look like? These are the real challenges.
  • ThoughtStream SDK:
  • Any application can be built to use the ThoughtStream engine, but there are guidelines for creating idiomatic, ThoughtStream-compliant apps.
      • Abstract User Interface Toolkit: Cairo Based? HTML5 renderer; OpenGL Renderer; Platform Independent. Special affordance for bitmap graphics? VNC? Tiles?
      • Content/Data Agnostic.
      • Entire Object Model is connected to ThoughtStream Backend.
      • Bindings for multiple languages.
  • All ThoughtStream apps run in the cloud, but may have local app-logic? What does the division look like?
  • Object model is persistently linked to backend.
      • Persistent state
      • Inherently multi-user
      • Uses sockets
  • Other Possibilities include Skype, Video Conference, Audio Recorder, Video Player, RSS Viewer, Stock Ticker, Twitter Viewer, PDF Viewer.
  • Summary: Toolbars are linked to Regions, Object Regions are linked to Objects, and Canvas Regions are linked to nothing!
  • Global Mode
  • Accessing Global Mode should not be possible if there are any Active Regions. Global Mode gives a user the ability to recall saved locations, zoom in and out, go back in time, switch sessions, and other “Meta” functions. If there are any Active Regions, that means someone is still working.
  • As illustrated in FIG. 61, Global mode contains a powerful set of options which allow the user to administer high-level meta features. All of the features possible with the Cloud Pane (adding and managing Apps & Assets) are available here. Primarily, though, this is a place for large-scale wayfinding, setting bookmarks (Pins), and scrubbing back through time. Only one global mode may be active on a device at a given time. Global mode does not run as a “window” on a client device; it completely replaces the default view for a session, which permits advanced rendering of the smaller windows contained within. Users can pan around the SuperSpace just as they would in local mode. Double tapping will return them to local mode. Users can still create Regions here, but there is no drawing, and no “Active Mode” or Focus for Objects. The Regions here are only for organizing, moving, and erasing objects.
  • As illustrated in FIG. 61, the history browser allows a user to return to any point in time. Historical activity is shown on a calendar view and also a scrollable, zoomable timeline which gives a “bird's eye” view of activity over the course of many days, months, or years.
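  • One possible way to back such a timeline, sketched substantially in the form of PHP/SQL commands below, is to aggregate the recorded events per day so the zoomable view can summarize activity over many months or years; the EventLog table and its columns are hypothetical:
  • <?PHP
    // summarize recorded activity per day for the "bird's eye" timeline view
    $DBhost = "192.168.0.100"; $DBuser = "dwe"; $DBpassword = "secret"; // placeholder credentials
    mysql_connect($DBhost, $DBuser, $DBpassword);
    mysql_select_db("THOUGHTSTREAM_DB");
    $result = mysql_query(
      "SELECT DATE(event_time) AS day, COUNT(*) AS events " .
      "FROM EventLog GROUP BY DATE(event_time) ORDER BY day");
    while ($row = mysql_fetch_assoc($result)) {
      // each row drives one marker on the scrollable, zoomable timeline
      echo $row['day'] . ': ' . $row['events'] . " events\n";
    }
    mysql_close();
    ?>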
  • Canvas
  • Drawing is not permitted when zoomed out (FIG. 62). But what we can do is automatically zoom in if drawing is attempted (FIG. 63).
  • “Pinch and Zoom” is one of the most common UI paradigms for touch screens. All devices should support this gesture. One of the major problems with global actions like Pan and Zoom is that we don't want one user to interfere with another user's space. One way around this is to not allow Pan and Zoom if there are any active Regions on the device.
  • Additional DWE Embodiments: Walls with Combination of Touch & Non-Touch Cells
  • As illustrated in FIG. 57, for the most part, we don't want UI elements to go above a person's eye-line. The average height of an American adult's eyes is around five feet. Conversely, we don't want UI objects to go below a certain point on the wall either, or else they will risk going unnoticed or become difficult to access. We don't want to make people bend over. Because we have decoupled our Toolbars from the rectangular objects they are associated with, we can position the toolbars independently of the Objects they belong to. To wit: we can make sure that Toolbars are within the ergonomic zone. Even if an object is so large that the edges cannot be seen, the toolbars for that object (if it is active / has focus) can float on top and be accessible by the users.
  • The use of touch cells in the ergonomic zone (see FIG. 57) is most useful. However, floor-to-ceiling viewing as a single screen is also critical. Currently this requires a full array of touch screens three high (7′) in landscape or two high (8′) in portrait. The upper and lower portions, as illustrated, are difficult to reach; hence the ergonomic zone is likely the most common area requiring touch. To adjust for this we propose using touch cells primarily in the Ergonomic Zone, roughly from eye level to waist level. This equates to the center row of cells in the three-high landscape design. See the illustration in FIG. 64. The outcome retains most of the wall's performance at a significantly lower cost. This both increases market penetration and differentiates the Thought Streaming wall solution. As is apparent from these illustrations, we also intend to limit toolbar locations to within the Ergonomic Zone, further differentiating Thought Streaming walls. We seek patent protection on both of these ideas: touch cells in the Ergonomic Zone and toolbars that float within the same zone. In FIG. 64, the dashed line illustrates the touch area, potentially limited to only the Ergonomic Zone.
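  • An illustrative check of whether a wall coordinate falls within the touch-enabled center row of a three-high cell array, written substantially in the form of PHP commands, is provided below; the cell height and row index are hypothetical values:
  • <?PHP
    // three-high landscape cell array: only the center row of cells is touch sensitive
    $cell_height = 28; // inches per display cell (illustrative; roughly 7 feet / 3 rows)
    $touch_row   = 1;  // 0 = bottom row, 1 = center (Ergonomic Zone) row, 2 = top row
    function is_touch_enabled($y, $cell_height, $touch_row) {
      // true when the wall-relative height $y lands inside the touch-sensitive row
      $row = (int) floor($y / $cell_height);
      return $row == $touch_row;
    }
    // a touch 40 inches from the bottom of the wall lands in the center (touch) row
    var_dump(is_touch_enabled(40, $cell_height, $touch_row)); // bool(true)
    ?>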
  • Non-Limiting Example Embodiments Highlighting Numerous Further Advantageous Aspects
  • 1. A digital collaborative whiteboarding processor-implemented method embodiment, comprising:
  • obtaining user whiteboard input from a client device of a user participating in a digital collaborative whiteboarding session;
  • parsing the user whiteboard input to determine user instructions;
  • identifying a user instruction based on parsing the user whiteboard input;
  • modifying an attribute of the digital collaborative whiteboarding session according to the identified user instruction;
  • generating updated client viewport content for the client device; and
  • providing the updated client viewport content to the client device.
  • 2. The method of embodiment 1, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • 3. The method of embodiment 1, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • 4. The method of embodiment 3, further comprising:
  • determining that client viewport content of a second client device should be modified because of modifying the tile object included in the digital whiteboard;
  • generating updated client viewport content for the second client device after determining that the client viewport content of the second client device should be modified; and
  • providing, to the second client device, the updated client viewport content for the second client device.
  • 5. The method of embodiment 1, wherein the user whiteboard input includes data on a touchscreen gesture performed by the user.
  • 6. The method of embodiment 5, wherein the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • 7. The method of embodiment 1, wherein the user instructions include client viewport modification instructions and tile object modification instructions.
  • 8. A digital collaborative whiteboarding system embodiment, comprising:
  • a processor; and
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
      • obtain user whiteboard input from a client device of a user participating in a digital collaborative whiteboarding session;
      • parse the user whiteboard input to determine user instructions;
      • identify a user instruction based on parsing the user whiteboard input;
      • modify an attribute of the digital collaborative whiteboarding session according to the identified user instruction;
      • generate updated client viewport content for the client device; and
      • provide the updated client viewport content to the client device.
  • 9. The system of embodiment 8, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • 10. The system of embodiment 8, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • 11. The system of embodiment 10, the memory further storing instructions to:
  • determine that client viewport content of a second client device should be modified because of modifying the tile object included in the digital whiteboard;
  • generate updated client viewport content for the second client device after determining that the client viewport content of the second client device should be modified; and
  • provide, to the second client device, the updated client viewport content for the second client device.
  • 12. The system of embodiment 8, wherein the user whiteboard input includes data on a touchscreen gesture performed by the user.
  • 13. The system of embodiment 12, wherein the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • 14. The system of embodiment 8, wherein the user instructions include client viewport modification instructions and tile object modification instructions.
  • 15. A processor-readable tangible medium embodiment storing processor-executable digital collaborative whiteboarding instructions to:
  • obtain user whiteboard input from a client device of a user participating in a digital collaborative whiteboarding session;
  • parse the user whiteboard input to determine user instructions;
  • identify a user instruction based on parsing the user whiteboard input;
  • modify an attribute of the digital collaborative whiteboarding session according to the identified user instruction;
  • generate updated client viewport content for the client device; and
  • provide the updated client viewport content to the client device.
  • 16. The medium of embodiment 15, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a client viewport specification associated with the client device.
  • 17. The medium of embodiment 15, wherein modifying the attribute of the digital collaborative whiteboarding session includes modifying a tile object included in a digital whiteboard that is part of the digital collaborative whiteboarding session.
  • 18. The medium of embodiment 17, further storing instructions to:
  • determine that client viewport content of a second client device should be modified because of modifying the tile object included in the digital whiteboard;
  • generate updated client viewport content for the second client device after determining that the client viewport content of the second client device should be modified; and
  • provide, to the second client device, the updated client viewport content for the second client device.
  • 19. The medium of embodiment 15, wherein the user whiteboard input includes data on a touchscreen gesture performed by the user.
  • 20. The medium of embodiment 19, wherein the client device is one of: a multi-user touchscreen device; and a mobile touchscreen-enabled device.
  • 21. The medium of embodiment 15, wherein the user instructions include client viewport modification instructions and tile object modification instructions.
  • 22. A chord-based gesturing processor-implemented method embodiment, comprising:
  • obtaining user touchscreen data input into a touchscreen device;
  • identifying a chord from the user touchscreen data input, the identification of the chord including a number of elements included in the chord;
  • determining, via a processor, for each element included in the chord, at least:
      • a touchscreen pressure;
      • a time parameter; and
      • a motion parameter;
  • identifying a gesture context for the chord based on a spatial coordinate for at least one chord element of the chord;
  • generating a database lookup query for a user intention corresponding to the chord using the identified gesture context, the touchscreen pressure, the time parameter, and the motion parameter;
  • querying a database using the generated database lookup query; and
  • obtaining the user intention corresponding to the chord based on querying the database.
  • 23. The method of embodiment 22, wherein at least one of the elements included in the chord is a stylus touch.
  • 24. The method of embodiment 22, wherein the time parameter is a hold time.
  • 25. The method of embodiment 22, wherein the motion parameter is a direction vector.
  • 26. The method of embodiment 22, wherein the touchscreen device is one of: a multi-user touchscreen device; and a mobile device.
  • 27. The method of embodiment 22, wherein the gesture context for the chord includes a touchscreen object located at the spatial coordinate for the at least one chord element of the chord.
  • 28. The method of embodiment 22, wherein the user intention includes an intention to modify an object displayed on the touchscreen device.
  • 29. A chord-based gesturing apparatus embodiment, comprising:
  • a processor; and
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
  • obtain user touchscreen data input;
  • identify a chord from the user touchscreen data input, the identification of the chord including a number of elements included in the chord;
  • determine, for each element included in the chord, at least:
      • a touchscreen pressure;
      • a time parameter; and
      • a motion parameter;
  • identify a gesture context for the chord based on a spatial coordinate for at least one chord element of the chord;
  • generate a database lookup query for a user intention corresponding to the chord using the identified gesture context, the touchscreen pressure, the time parameter, and the motion parameter;
  • query a database using the generated database lookup query; and
  • obtain the user intention corresponding to the chord based on querying the database.
  • 30. The apparatus of embodiment 29, wherein at least one of the elements included in the chord is a stylus touch.
  • 31. The apparatus of embodiment 29, wherein the time parameter is a hold time.
  • 32. The apparatus of embodiment 29, wherein the motion parameter is a direction vector.
  • 33. The apparatus of embodiment 29, wherein the apparatus is one of: a multi-user touchscreen device; and a mobile device.
  • 34. The apparatus of embodiment 29, wherein the gesture context for the chord includes a touchscreen object located at the spatial coordinate for the at least one chord element of the chord.
  • 35. The apparatus of embodiment 29, wherein the user intention includes an intention to modify an object displayed on the apparatus.
  • 36. A processor-readable tangible medium embodiment storing processor-executable chord-based gesturing instructions to:
  • obtain user touchscreen data input into a touchscreen device;
  • identify a chord from the user touchscreen data input, the identification of the chord including a number of elements included in the chord;
  • determine, for each element included in the chord, at least:
      • a touchscreen pressure;
      • a time parameter; and
      • a motion parameter;
  • identify a gesture context for the chord based on a spatial coordinate for at least one chord element of the chord;
  • generate a database lookup query for a user intention corresponding to the chord using the identified gesture context, the touchscreen pressure, the time parameter, and the motion parameter;
  • query a database using the generated database lookup query; and
  • obtain the user intention corresponding to the chord based on querying the database.
  • 37. The medium of embodiment 36, wherein at least one of the elements included in the chord is a stylus touch.
  • 38. The medium of embodiment 36, wherein the time parameter is a hold time.
  • 39. The medium of embodiment 36, wherein the motion parameter is a direction vector.
  • 40. The medium of embodiment 36, wherein the touchscreen device is one of: a multi-user touchscreen device; and a mobile device.
  • 41. The medium of embodiment 36, wherein the gesture context for the chord includes a touchscreen object located at the spatial coordinate for the at least one chord element of the chord.
  • 42. The medium of embodiment 36, wherein the user intention includes an intention to modify an object displayed on the touchscreen device.
  • 43. A digital whiteboard file system processor-implemented method embodiment, comprising:
  • obtaining a whiteboard input from a participant in a collaborative digital whiteboarding session;
  • determining a whiteboarding instruction by parsing the whiteboard input;
  • modifying, via a processor, a file system representative of the digital collaborative whiteboarding session according to the identified whiteboard instructions; and
  • providing an indication of modification of the file system.
  • 44. The method of embodiment 43, wherein the collaborative digital whiteboarding session is modifiable by a plurality of participants.
  • 45. The method of embodiment 43, further comprising:
  • generating a timestamp associated with the whiteboarding instruction; and
  • storing the timestamp in the modified file system.
  • 46. The method of embodiment 43, wherein the file system comprises a plurality of directories, each directory representing a tile in a digital whiteboard; and
  • wherein each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • 47. The method of embodiment 46, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • 48. The method of embodiment 46, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • 49. The method of embodiment 46, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • 50. The method of embodiment 46, wherein one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • 51. The method of embodiment 46, wherein the tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • 52. The method of embodiment 46, wherein the tile content data structure includes metadata associated with the stored tile content.
  • 53. A digital whiteboard system embodiment, comprising:
  • a processor; and
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
      • obtain a whiteboard input from a participant in a collaborative digital whiteboarding session;
      • determine a whiteboarding instruction by parsing the whiteboard input;
      • modify, via the processor, a file system representative of the digital collaborative whiteboarding session according to the identified whiteboard instructions; and
      • provide an indication of modification of the file system.
  • 54. The system of embodiment 53, wherein the collaborative digital whiteboarding session is modifiable by a plurality of participants.
  • 55. The system of embodiment 53, the memory further storing instructions to:
  • generate a timestamp associated with the whiteboarding instruction; and
  • store the timestamp in the modified file system.
  • 56. The system of embodiment 53, wherein the file system comprises a plurality of directories, each directory representing a tile in a digital whiteboard; and
  • wherein each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • 57. The system of embodiment 56, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • 58. The system of embodiment 56, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • 59. The system of embodiment 56, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • 60. The system of embodiment 56, wherein one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • 61. The system of embodiment 56, wherein the tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • 62. The system of embodiment 56, wherein the tile content data structure includes metadata associated with the stored tile content.
  • 63. A non-transitory computer-readable medium embodiment storing processor-executable digital whiteboard instructions to:
  • obtain a whiteboard input from a participant in a collaborative digital whiteboarding session;
  • determine a whiteboarding instruction by parsing the whiteboard input;
  • modify, via a processor, a file system representative of the digital collaborative whiteboarding session according to the identified whiteboard instructions; and
  • provide an indication of modification of the file system.
  • 64. The medium of embodiment 63, wherein the collaborative digital whiteboarding session is modifiable by a plurality of participants.
  • 65. The medium of embodiment 63, further storing instructions to:
  • generate a timestamp associated with the whiteboarding instruction; and
  • store the timestamp in the modified file system.
  • 66. The medium of embodiment 63, wherein the file system comprises a plurality of directories, each directory representing a tile in a digital whiteboard; and
  • wherein each directory includes a tile content data structure storing tile content for the tile in the digital whiteboard that the directory represents.
  • 67. The medium of embodiment 66, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique timestamp.
  • 68. The medium of embodiment 66, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures is associated with a unique set of user identifications.
  • 69. The medium of embodiment 66, wherein one of the directories includes a plurality of tile content data structures, wherein each of the plurality of tile content data structures represents a layer within the digital whiteboard.
  • 70. The medium of embodiment 66, wherein one of the directories includes a sub-directory storing a sub-tile content data structure including sub-tile content for a sub-tile in the digital whiteboard that the sub-directory represents.
  • 71. The medium of embodiment 66, wherein the tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • 72. The medium of embodiment 66, wherein the tile content data structure includes metadata associated with the stored tile content.
  • 73. A digital whiteboard viewer processor-implemented method embodiment, comprising:
  • obtaining a user gesture input via a touchscreen interface of a device;
  • identifying a user whiteboarding instruction based on the obtained user gesture input;
  • generating a user whiteboard message including the user whiteboarding instruction for providing into a collaborative digital whiteboarding session;
  • providing the user whiteboard message to a digital whiteboard server;
  • obtaining client viewport data in response to the user whiteboard message;
  • rendering, via the device, a client viewport screen based on the client viewport data; and
  • displaying the rendered client viewport screen via the touchscreen interface of the device.
  • 74. The method of embodiment 73, wherein the device is a mobile device.
  • 75. The method of embodiment 73, wherein the user whiteboarding instruction is a client viewport modification instruction.
  • 76. The method of embodiment 73, wherein the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • 77. The method of embodiment 73, wherein the user whiteboarding instruction includes an instruction to display an evolution of tile content displayed within the client viewport screen via the touchscreen interface of the device.
  • 78. The method of embodiment 77, wherein the evolution of the tile content is presented as a video file.
  • 79. The method of embodiment 77, wherein the tile content includes at least one of:
  • a remote window object;
  • an audio-visual object; and
  • a multi-page document.
  • 80. The method of embodiment 73, wherein the rendered client viewport screen depicts tile content of a portion of a tile included in the digital collaborative whiteboarding session.
  • 81. The method of embodiment 75, wherein the client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • 82. The method of embodiment 75, wherein the rendered client viewport screen is customized automatically to an attribute of the touchscreen interface of the device.
  • 83. A digital whiteboard viewer apparatus embodiment, comprising:
  • a processor; and
  • a memory disposed in communication with the processor and storing processor-executable instructions to:
      • obtain a user gesture input via a touchscreen interface of a device;
      • identify a user whiteboarding instruction based on the obtained user gesture input;
      • generate a user whiteboard message including the user whiteboarding instruction for providing into a collaborative digital whiteboarding session;
      • provide the user whiteboard message to a digital whiteboard server;
      • obtain client viewport data in response to the user whiteboard message;
      • render, via the device, a client viewport screen based on the client viewport data; and
      • display the rendered client viewport screen via the touchscreen interface of the device.
  • 84. The apparatus of embodiment 83, wherein the device is a mobile device.
  • 85. The apparatus of embodiment 83, wherein the user whiteboarding instruction is a client viewport modification instruction.
  • 86. The apparatus of embodiment 83, wherein the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • 87. The apparatus of embodiment 83, wherein the user whiteboarding instruction includes an instruction to display an evolution of tile content displayed within the client viewport screen via the touchscreen interface of the device.
  • 88. The apparatus of embodiment 87, wherein the evolution of the tile content is presented as a video file.
  • 89. The apparatus of embodiment 87, wherein the tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • 90. The apparatus of embodiment 83, wherein the rendered client viewport screen depicts tile content of a portion of a tile included in the digital collaborative whiteboarding session.
  • 91. The apparatus of embodiment 85, wherein the client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • 92. The apparatus of embodiment 85, wherein the rendered client viewport screen is customized automatically to an attribute of the touchscreen interface of the device.
  • 93. A non-transitory computer-readable medium embodiment storing processor-executable digital whiteboard viewer instructions to:
  • obtain a user gesture input via a touchscreen interface of a device;
  • identify a user whiteboarding instruction based on the obtained user gesture input;
  • generate a user whiteboard message including the user whiteboarding instruction for providing into a collaborative digital whiteboarding session;
  • provide the user whiteboard message to a digital whiteboard server;
  • obtain client viewport data in response to the user whiteboard message;
  • render, via the device, a client viewport screen based on the client viewport data; and
  • display the rendered client viewport screen via the touchscreen interface of the device.
  • 94. The medium of embodiment 93, wherein the device is a mobile device.
  • 95. The medium of embodiment 93, wherein the user whiteboarding instruction is a client viewport modification instruction.
  • 96. The medium of embodiment 93, wherein the user whiteboarding instruction includes an instruction to modify a tile object displayed within the client viewport screen via the touchscreen interface of the device.
  • 97. The medium of embodiment 93, wherein the user whiteboarding instruction includes an instruction to display an evolution of tile content displayed within the client viewport screen via the touchscreen interface of the device.
  • 98. The medium of embodiment 97, wherein the evolution of the tile content is presented as a video file.
  • 99. The medium of embodiment 97, wherein the tile content includes at least one of: a remote window object; an audio-visual object; and a multi-page document.
  • 100. The medium of embodiment 93, wherein the rendered client viewport screen depicts tile content of a portion of a tile included in the digital collaborative whiteboarding session.
  • 101. The medium of embodiment 95, wherein the client viewport modification instruction includes an instruction to modify the content of the rendered client viewport screen to depict content of another rendered client viewport screen of another device participating in the digital collaborative whiteboarding session.
  • 102. The medium of embodiment 95, wherein the rendered client viewport screen is customized automatically to an attribute of the touchscreen interface of the device.
  • In order to address various issues and advance the art, the entirety of this application (including the Cover Page, Title, Headings, Field, Background, Summary, Brief Description of the Drawings, Detailed Description, Claims, Abstract, Figures, Appendices and/or otherwise) shows by way of illustration various example embodiments in which the claimed innovations may be practiced. The advantages and features of the application are of a representative sample of embodiments only, and are not exhaustive and/or exclusive. They are presented only to assist in understanding and teach the claimed principles. It should be understood that they are not representative of all claimed innovations. As such, certain aspects of the disclosure have not been discussed herein. That alternate embodiments may not have been presented for a specific portion of the innovations or that further undescribed alternate embodiments may be available for a portion is not to be considered a disclaimer of those alternate embodiments. It will be appreciated that many of those undescribed embodiments incorporate the same principles of the innovations and others are equivalent. Thus, it is to be understood that other embodiments may be utilized and functional, logical, operational, organizational, structural and/or topological modifications may be made without departing from the scope and/or spirit of the disclosure. As such, all examples and/or embodiments are deemed to be non-limiting throughout this disclosure. Also, no inference should be drawn regarding those embodiments discussed herein relative to those not discussed herein other than it is as such for purposes of reducing space and repetition. For instance, it is to be understood that the logical and/or topological structure of any combination of any data flow sequence(s), program components (a component collection), other components and/or any present feature sets as described in the figures and/or throughout are not limited to a fixed operating order and/or arrangement, but rather, any disclosed order is exemplary and all equivalents, regardless of order, are contemplated by the disclosure. Furthermore, it is to be understood that such features are not limited to serial execution, but rather, any number of threads, processes, processors, services, servers, and/or the like that may execute asynchronously, concurrently, in parallel, simultaneously, synchronously, and/or the like are also contemplated by the disclosure. As such, some of these features may be mutually contradictory, in that they cannot be simultaneously present in a single embodiment. Similarly, some features are applicable to one aspect of the innovations, and inapplicable to others. In addition, the disclosure includes other innovations not presently claimed. Applicant reserves all rights in those presently unclaimed innovations, including the right to claim such innovations, file additional applications, continuations, continuations-in-part, divisions, and/or the like thereof. As such, it should be understood that advantages, embodiments, examples, functional, features, logical, operational, organizational, structural, topological, and/or other aspects of the disclosure are not to be considered limitations on the disclosure as defined by the claims or limitations on equivalents to the claims. 
It is to be understood that, depending on the particular needs and/or characteristics of a DWE individual and/or enterprise user, database configuration and/or relational model, data type, data transmission and/or network framework, syntax structure, and/or the like, various embodiments of the DWE may be implemented that allow a great deal of flexibility and customization. For example, aspects of the DWE may be adapted for negotiations, mediation, group think studies, crowd-sourcing applications, and/or the like. While various embodiments and discussions of the DWE have been directed to digital collaboration, it is to be understood that the embodiments described herein may be readily configured and/or customized for a wide variety of other applications and/or implementations.

Claims (23)

What is claimed is:
1. A system comprising an interactive computer display device, the interactive computer display device comprising:
a display surface in which a vertically central section of the display surface is touch sensitive, and a section of the display surface either above or below the vertically central section of the display surface is touch insensitive;
a memory; and
a data processor coupled to the memory, the data processor configured to display workspace objects within the vertically central touch sensitive section, within the touch insensitive section, and partially within both the touch sensitive section and the touch insensitive section.
2. The system of claim 1, wherein the touch insensitive section extends from the top of the vertically central section of the display surface up to the top of the display surface,
and wherein the display surface further has a lower touch insensitive section extending from the bottom of the vertically central section of the display surface down to the bottom of the display surface.
3. The system of claim 1, wherein the display surface, when mounted in an upright orientation, is tall enough to extend above the eye level and below the waist level of an average American adult.
4. The system of claim 3, wherein the vertically central section of the display surface has a height approximately equal to the distance between the eye level and the waist level of an average American adult.
5. The system of claim 4, wherein the touch insensitive section extends from the top of the vertically central section of the display surface up to the top of the display surface,
and wherein the display surface further has a lower touch insensitive section extending from the bottom of the vertically central section of the display surface down to the bottom of the display surface.
6. The system of claim 1, mounted such that the vertically central section of the display surface extends from the eye level to the waist level of an average American adult.
7. The system of claim 6, wherein the touch insensitive section extends from the top of the vertically central section of the display surface up to the top of the display surface,
and wherein the display surface further has a lower touch insensitive section extending from the bottom of the vertically central section of the display surface down to the bottom of the display surface.
8. The system of claim 1, for use by a user, wherein the display surface, when mounted in an upright orientation, is tall enough to extend above the user's eye level and below the user's waist level.
9. The system of claim 1, wherein the touch insensitive section extends from the top of the vertically central section of the display surface up to the top of the display surface,
wherein the display surface further has a lower touch insensitive section extending from the bottom of the vertically central section of the display surface down to the bottom of the display surface,
wherein the display surface comprises an array of adjoining rectangular display cells each oriented horizontally when the display surface is mounted in the upright orientation,
wherein the vertically central section has a height of no more than three display cells,
and wherein the touch insensitive sections of the display surface each have a height of at least one display cell.
10. The system of claim 9, wherein the vertically central section has a height of no more than two display cells.
11. The system of claim 9, wherein the array of adjoining rectangular display cells has a width of at least five display cells.
12. The system of claim 1, wherein the data processor is configured to:
detect user behavior identifying a particular position on the vertically central section of the display surface; and
display a touch-responsive toolbar at a position on the display surface which is dependent upon the particular position.
13. The system of claim 12, wherein the data processor is configured to display the toolbar such that no part of the toolbar ever extends above or below the vertically central section of the display surface.
14. The system of claim 1, further comprising:
a memory; and
a data processor coupled to the memory, the data processor configured to:
receive user input from the vertically central section of the display surface;
modify an attribute of a collaborative workspace in dependence upon the user input;
generate an updated object in dependence upon the modified attribute; and
display the updated object on the display surface.
15. The system of claim 14, wherein in displaying the updated object on the display surface, the data processor is further configured to display the updated object at least partially within the touch insensitive section.
16. The system of claim 14, wherein in displaying the updated object on the display surface, the data processor is configured to display the updated object partially within the touch insensitive section and partially within the vertically central touch sensitive section.
17. The system of claim 1, further comprising:
a memory;
a network interface; and
a data processor coupled to the memory, the data processor configured to:
modify an attribute of a collaborative workspace in dependence upon an event received from another touch sensitive device via the network interface;
generate an updated object in dependence upon the modified attribute; and
display the updated object on the display surface.
18. The system of claim 17, wherein in displaying the updated object on the display surface, the data processor is further configured to display the updated object partially within the touch insensitive section and partially within the vertically central touch sensitive section.
19. The system of claim 1, wherein the display surface is not entirely coplanar.
20. The system of claim 19, wherein the display surface includes first and second respectively planar display portions, the first and second display portions being adjacent to each other but non-coplanar with each other.
21. The system of claim 20, wherein the vertically central section and the touch insensitive section of the display surface are both within the first display portion,
and wherein the second display portion is adjacent horizontally to the first display portion.
22. The system of claim 21, wherein the second display portion is angled relative to the first display portion such that a user position exists from which displays of both display portions are visible.
23. The system of claim 21, wherein the second display portion is entirely touch insensitive.
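
Claims 1, 2 and 9 recite a display surface whose vertically central band is touch sensitive while the bands above and below it are display-only, with workspace objects free to be drawn anywhere, including across the boundary. The following is a minimal sketch of one way such a split could be handled by a display controller; the names (DisplaySurface, WorkspaceObject, routeTouch, drawObject) and the pixel-based representation are hypothetical illustrations, not the claimed implementation.

```typescript
// Hypothetical sketch of the touch-band split in claims 1-2 and 9.
interface WorkspaceObject { id: string; x: number; y: number; w: number; h: number; }

interface DisplaySurface {
  widthPx: number;
  heightPx: number;
  // Vertically central, touch-sensitive band; the regions above and below
  // it are touch insensitive (claims 1-2).
  touchBandTopPx: number;
  touchBandBottomPx: number;
}

// Rendering is unrestricted: an object may sit wholly inside the touch band,
// wholly in a touch-insensitive band, or straddle the boundary (claim 1).
function render(surface: DisplaySurface, objects: WorkspaceObject[]): void {
  for (const obj of objects) {
    drawObject(obj);
  }
}

// Touch events, by contrast, are honored only inside the central band;
// touches landing in the upper or lower display-only bands are ignored.
function routeTouch(
  surface: DisplaySurface,
  x: number,
  y: number,
  onTouch: (x: number, y: number) => void
): void {
  if (y >= surface.touchBandTopPx && y <= surface.touchBandBottomPx) {
    onTouch(x, y);
  }
}

function drawObject(obj: WorkspaceObject): void {
  // Placeholder for an actual graphics call.
}
```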
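Claims 12 and 13 describe displaying a touch-responsive toolbar at a position that depends on where the user touched, while never letting the toolbar extend above or below the touch-sensitive band. A small illustrative sketch of such clamping logic is given below; placeToolbar and its parameters are assumed names for illustration only.

```typescript
// Hypothetical sketch of claims 12-13: position the toolbar near the touch
// point, but keep it entirely within the touch-sensitive band.
interface Rect { x: number; y: number; w: number; h: number; }

function placeToolbar(
  touchX: number,
  touchY: number,
  toolbarW: number,
  toolbarH: number,
  bandTop: number,
  bandBottom: number,
  surfaceW: number
): Rect {
  // Start centered on the touched position (claim 12).
  let x = touchX - toolbarW / 2;
  let y = touchY - toolbarH / 2;
  // Clamp horizontally to the display surface.
  x = Math.max(0, Math.min(x, surfaceW - toolbarW));
  // Clamp vertically so no part of the toolbar leaves the band (claim 13).
  y = Math.max(bandTop, Math.min(y, bandBottom - toolbarH));
  return { x, y, w: toolbarW, h: toolbarH };
}
```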
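Claims 14 through 18 describe the same update path driven from two directions: local user input received from the touch-sensitive band, and events received from another touch-sensitive device over a network interface, each of which modifies a collaborative-workspace attribute, regenerates the affected object, and displays it, possibly partly within the touch-insensitive section. The sketch below assumes a hypothetical CollaborativeWorkspace class and WorkspaceEvent shape to illustrate that flow; it is not the patented implementation.

```typescript
// Hypothetical sketch of claims 14-18: local and remote events share one
// modify-regenerate-display path.
type Attributes = Record<string, unknown>;

interface WorkspaceEvent { objectId: string; changes: Attributes; }

class CollaborativeWorkspace {
  private attributes = new Map<string, Attributes>();

  // Local path: user input from the touch-sensitive band (claim 14).
  applyLocalInput(event: WorkspaceEvent): void {
    this.modify(event);
  }

  // Remote path: an event from another touch-sensitive device received
  // via the network interface (claim 17).
  applyRemoteEvent(event: WorkspaceEvent): void {
    this.modify(event);
  }

  private modify(event: WorkspaceEvent): void {
    // Modify the workspace attribute, then regenerate the updated object.
    const current = this.attributes.get(event.objectId) ?? {};
    const updated = { ...current, ...event.changes };
    this.attributes.set(event.objectId, updated);
    this.display(event.objectId, updated);
  }

  // Display the updated object anywhere on the surface, including partially
  // within the touch-insensitive section (claims 15-16 and 18).
  private display(objectId: string, attrs: Attributes): void {
    // Placeholder for rendering the regenerated object.
  }
}
```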
US14/018,370 2011-05-23 2013-09-04 Digital workspace ergonomics apparatuses, methods and systems Abandoned US20140055400A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US14/018,370 US20140055400A1 (en) 2011-05-23 2013-09-04 Digital workspace ergonomics apparatuses, methods and systems
PCT/US2013/058261 WO2014039680A1 (en) 2012-09-05 2013-09-05 Digital workspace ergonomics apparatuses, methods and systems
PCT/US2013/058249 WO2014039670A1 (en) 2012-09-05 2013-09-05 Digital workspace ergonomics apparatuses, methods and systems
US15/457,752 US11740915B2 (en) 2011-05-23 2017-03-13 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/217,451 US20230350703A1 (en) 2011-05-23 2023-06-30 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/226,187 US11886896B2 (en) 2011-05-23 2023-07-25 Ergonomic digital collaborative workspace apparatuses, methods and systems

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161489238P 2011-05-23 2011-05-23
US13/478,994 US9430140B2 (en) 2011-05-23 2012-05-23 Digital whiteboard collaboration apparatuses, methods and systems
US201261697248P 2012-09-05 2012-09-05
US14/018,370 US20140055400A1 (en) 2011-05-23 2013-09-04 Digital workspace ergonomics apparatuses, methods and systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/478,994 Continuation-In-Part US9430140B2 (en) 2011-05-23 2012-05-23 Digital whiteboard collaboration apparatuses, methods and systems

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/457,752 Continuation US11740915B2 (en) 2011-05-23 2017-03-13 Ergonomic digital collaborative workspace apparatuses, methods and systems

Publications (1)

Publication Number Publication Date
US20140055400A1 true US20140055400A1 (en) 2014-02-27

Family

ID=50147555

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/018,370 Abandoned US20140055400A1 (en) 2011-05-23 2013-09-04 Digital workspace ergonomics apparatuses, methods and systems
US15/457,752 Active 2032-07-24 US11740915B2 (en) 2011-05-23 2017-03-13 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/217,451 Pending US20230350703A1 (en) 2011-05-23 2023-06-30 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/226,187 Active US11886896B2 (en) 2011-05-23 2023-07-25 Ergonomic digital collaborative workspace apparatuses, methods and systems

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/457,752 Active 2032-07-24 US11740915B2 (en) 2011-05-23 2017-03-13 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/217,451 Pending US20230350703A1 (en) 2011-05-23 2023-06-30 Ergonomic digital collaborative workspace apparatuses, methods and systems
US18/226,187 Active US11886896B2 (en) 2011-05-23 2023-07-25 Ergonomic digital collaborative workspace apparatuses, methods and systems

Country Status (1)

Country Link
US (4) US20140055400A1 (en)

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140380198A1 (en) * 2013-06-24 2014-12-25 Xiaomi Inc. Method, device, and terminal apparatus for processing session based on gesture
US20150116468A1 (en) * 2013-10-31 2015-04-30 Ati Technologies Ulc Single display pipe multi-view frame composer method and apparatus
US20150256638A1 (en) * 2014-03-05 2015-09-10 Ricoh Co., Ltd. Fairly Adding Documents to a Collaborative Session
USD738909S1 (en) * 2014-01-09 2015-09-15 Microsoft Corporation Display screen with animated graphical user interface
US20150324041A1 (en) * 2014-05-06 2015-11-12 Symbol Technologies, Inc. Apparatus and method for activating a trigger mechanism
US20160026358A1 (en) * 2014-07-28 2016-01-28 Lenovo (Singapore) Pte, Ltd. Gesture-based window management
US20160103552A1 (en) * 2013-05-03 2016-04-14 Samsung Electronics Co., Ltd Screen operation method for electronic device based on electronic device and control action
US20160140944A1 (en) * 2013-06-04 2016-05-19 Berggram Development Oy Grid based user interference for chord presentation on a touch screen device
US20160170551A1 (en) * 2014-12-15 2016-06-16 Salt International Corp. Refreshing method of background signal and device for applying the method
WO2016133534A1 (en) 2015-02-20 2016-08-25 Hewlett-Packard Development Company, L.P. An automatically invoked unified visualization interface
US9465434B2 (en) 2011-05-23 2016-10-11 Haworth, Inc. Toolbar dynamics for digital whiteboard
US9471192B2 (en) 2011-05-23 2016-10-18 Haworth, Inc. Region dynamics for digital whiteboard
US9479548B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard access to global collaboration data
US9479549B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard with federated display
US20160320954A1 (en) * 2015-04-30 2016-11-03 Elwha Llc One-touch replay for whiteboard
US20170068414A1 (en) * 2015-09-09 2017-03-09 Microsoft Technology Licensing, Llc Controlling a device
US20170205987A1 (en) * 2016-01-15 2017-07-20 Pearson Education, Inc. Interactive presentation controls
EP3203365A1 (en) * 2016-02-05 2017-08-09 Prysm, Inc. Cross platform annotation syncing
US20180181231A1 (en) * 2015-06-12 2018-06-28 Sharp Kabushiki Kaisha Eraser device and command input system
US10063660B1 (en) 2018-02-09 2018-08-28 Picmonkey, Llc Collaborative editing of media in a mixed computing environment
USD834059S1 (en) * 2017-03-02 2018-11-20 Navitaire Llc Display screen with animated graphical user interface
US10191890B2 (en) * 2014-12-17 2019-01-29 Microsoft Technology Licensing, Llc Persistent viewports
WO2019067125A1 (en) * 2017-09-29 2019-04-04 Dropbox, Inc. Managing content item collections
US10255023B2 (en) 2016-02-12 2019-04-09 Haworth, Inc. Collaborative electronic whiteboard publication process
US10304037B2 (en) 2013-02-04 2019-05-28 Haworth, Inc. Collaboration system including a spatial event map
US10346000B2 (en) * 2014-02-18 2019-07-09 Sony Corporation Information processing apparatus and method, information processing system for improved security level in browsing of content
CN110069256A (en) * 2019-04-23 2019-07-30 北京三快在线科技有限公司 Draw method, apparatus, terminal and the storage medium of component
US20190273767A1 (en) * 2018-03-02 2019-09-05 Ricoh Company, Ltd. Conducting electronic meetings over computer networks using interactive whiteboard appliances and mobile devices
WO2019183726A1 (en) * 2018-03-27 2019-10-03 Vizetto Inc. Systems and methods for multi-screen display and interaction
US20200019292A1 (en) * 2016-09-30 2020-01-16 Sap Se Synchronized calendar and timeline adaptive user interface
US10545658B2 (en) 2017-04-25 2020-01-28 Haworth, Inc. Object processing and selection gestures for forming relationships among objects in a collaboration system
US10579240B2 (en) * 2018-02-09 2020-03-03 Picmonkey, Llc Live-rendered and forkable graphic edit trails
US10592595B2 (en) 2017-09-29 2020-03-17 Dropbox, Inc. Maintaining multiple versions of a collection of content items
US10761719B2 (en) * 2017-11-09 2020-09-01 Microsoft Technology Licensing, Llc User interface code generation based on free-hand input
US10802783B2 (en) 2015-05-06 2020-10-13 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US10860985B2 (en) 2016-10-11 2020-12-08 Ricoh Company, Ltd. Post-meeting processing using artificial intelligence
US10922426B2 (en) 2017-09-29 2021-02-16 Dropbox, Inc. Managing content item collections
US10956875B2 (en) 2017-10-09 2021-03-23 Ricoh Company, Ltd. Attendance tracking, presentation files, meeting services and agenda extraction for interactive whiteboard appliances
US11030585B2 (en) 2017-10-09 2021-06-08 Ricoh Company, Ltd. Person detection, person identification and meeting start for interactive whiteboard appliances
US11038973B2 (en) 2017-10-19 2021-06-15 Dropbox, Inc. Contact event feeds and activity updates
US11061639B2 (en) * 2018-03-14 2021-07-13 Ricoh Company, Ltd. Electronic whiteboard system, electronic whiteboard, and method of displaying content data
US11062271B2 (en) 2017-10-09 2021-07-13 Ricoh Company, Ltd. Interactive whiteboard appliances with learning capabilities
US11061547B1 (en) * 2013-03-15 2021-07-13 Study Social, Inc. Collaborative, social online education and whiteboard techniques
US11080466B2 (en) 2019-03-15 2021-08-03 Ricoh Company, Ltd. Updating existing content suggestion to include suggestions from recorded media using artificial intelligence
US11120342B2 (en) 2015-11-10 2021-09-14 Ricoh Company, Ltd. Electronic meeting intelligence
US11126325B2 (en) 2017-10-23 2021-09-21 Haworth, Inc. Virtual workspace including shared viewport markers in a collaboration system
US11163866B2 (en) * 2017-03-31 2021-11-02 Ricoh Company, Ltd. Shared terminal, display control method, and non-transitory computer-readable medium
US11205009B2 (en) * 2018-11-29 2021-12-21 Ricoh Company, Ltd. Information processing apparatus, information processing system, and control method
US11212127B2 (en) 2020-05-07 2021-12-28 Haworth, Inc. Digital workspace sharing over one or more display clients and authorization protocols for collaboration systems
US11222162B2 (en) 2017-09-29 2022-01-11 Dropbox, Inc. Managing content item collections
US11263384B2 (en) 2019-03-15 2022-03-01 Ricoh Company, Ltd. Generating document edit requests for electronic documents managed by a third-party document management service using artificial intelligence
US11270060B2 (en) 2019-03-15 2022-03-08 Ricoh Company, Ltd. Generating suggested document edits from recorded media using artificial intelligence
US11307735B2 (en) 2016-10-11 2022-04-19 Ricoh Company, Ltd. Creating agendas for electronic meetings using artificial intelligence
US20220188054A1 (en) * 2018-09-30 2022-06-16 Shanghai Dalong Technology Co., Ltd. Virtual input device-based method and system for remotely controlling pc
US11392754B2 (en) 2019-03-15 2022-07-19 Ricoh Company, Ltd. Artificial intelligence assisted review of physical documents
US11429263B1 (en) * 2019-08-20 2022-08-30 Lenovo (Singapore) Pte. Ltd. Window placement based on user location
US11573694B2 (en) 2019-02-25 2023-02-07 Haworth, Inc. Gesture based workflows in a collaboration system
US11573993B2 (en) 2019-03-15 2023-02-07 Ricoh Company, Ltd. Generating a meeting review document that includes links to the one or more documents reviewed
US11720741B2 (en) 2019-03-15 2023-08-08 Ricoh Company, Ltd. Artificial intelligence assisted review of electronic documents
US11740915B2 (en) 2011-05-23 2023-08-29 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US11750672B2 (en) 2020-05-07 2023-09-05 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US11861561B2 (en) 2013-02-04 2024-01-02 Haworth, Inc. Collaboration system including a spatial event map
US20240078142A1 (en) * 2022-09-02 2024-03-07 Dell Products, L.P. Managing user engagement during collaboration sessions in heterogenous computing platforms
US11934637B2 (en) 2017-10-23 2024-03-19 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10719132B2 (en) * 2014-06-19 2020-07-21 Samsung Electronics Co., Ltd. Device and method of controlling device
US10089056B2 (en) 2015-06-07 2018-10-02 Apple Inc. Device, method, and graphical user interface for collaborative editing in documents
KR20200014128A (en) * 2018-07-31 2020-02-10 삼성전자주식회사 Electronic device and method for executing application using both of display in the electronic device and external display
US11003353B2 (en) * 2019-04-11 2021-05-11 Microsoft Technology Licensing, Llc Method and system of enhanced interaction with a shared screen
US11656679B2 (en) * 2020-08-27 2023-05-23 Microsoft Technology Licensing, Llc Manipulator-based image reprojection
JP2022145219A (en) * 2021-03-19 2022-10-03 株式会社リコー Display device, data sharing system, display control method and program
US11785060B2 (en) 2021-07-29 2023-10-10 Zoom Video Communications, Inc. Content-aware device selection for modifying content elements in digital collaboration spaces

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6518957B1 (en) * 1999-08-13 2003-02-11 Nokia Mobile Phones Limited Communications device with touch sensitive screen
US20050237380A1 (en) * 2004-04-23 2005-10-27 Toshiaki Kakii Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20060161871A1 (en) * 2004-07-30 2006-07-20 Apple Computer, Inc. Proximity detector in handheld device
US20080158178A1 (en) * 2007-01-03 2008-07-03 Apple Inc. Front-end signal compensation
US20100318470A1 (en) * 2009-05-13 2010-12-16 Christoph Meinel Means for Processing Information
US20100315481A1 (en) * 2009-06-10 2010-12-16 Alcatel-Lucent Usa Inc. Portable video conferencing system with universal focal point
US20110109526A1 (en) * 2009-11-09 2011-05-12 Qualcomm Incorporated Multi-screen image display
US20110148926A1 (en) * 2009-12-17 2011-06-23 Lg Electronics Inc. Image display apparatus and method for operating the image display apparatus
US20110216064A1 (en) * 2008-09-08 2011-09-08 Qualcomm Incorporated Sending a parameter based on screen size or screen resolution of a multi-panel electronic device to a server
US20120011465A1 (en) * 2010-07-06 2012-01-12 Marcelo Amaral Rezende Digital whiteboard system
US20120026200A1 (en) * 2010-07-05 2012-02-02 Lenovo (Singapore) Pte, Ltd. Information input device, on-screen arrangement method thereof, and computer-executable program
US20120038572A1 (en) * 2010-08-14 2012-02-16 Samsung Electronics Co., Ltd. System and method for preventing touch malfunction in a mobile device
US20150084055A1 (en) * 2013-09-26 2015-03-26 Japan Display Inc. Display device

Family Cites Families (164)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4686332A (en) 1986-06-26 1987-08-11 International Business Machines Corporation Combined finger touch and stylus detection system for use on the viewing surface of a visual display device
US5394521A (en) 1991-12-09 1995-02-28 Xerox Corporation User interface with multiple workspaces for sharing display system objects
US5233687A (en) * 1987-03-25 1993-08-03 Xerox Corporation User interface with multiple workspaces for sharing display system objects
US5220657A (en) 1987-12-02 1993-06-15 Xerox Corporation Updating local copy of shared data in a collaborative system
US5008853A (en) 1987-12-02 1991-04-16 Xerox Corporation Representation of collaborative multi-user activities relative to shared structured data objects in a networked workstation environment
US5309555A (en) 1990-05-15 1994-05-03 International Business Machines Corporation Realtime communication of hand drawn images in a multiprogramming window environment
US5563996A (en) 1992-04-13 1996-10-08 Apple Computer, Inc. Computer note pad including gesture based note division tools and method
US5446842A (en) 1993-02-26 1995-08-29 Taligent, Inc. Object-oriented collaboration system
US5835713A (en) 1993-03-19 1998-11-10 Ncr Corporation Remote collaboration system for selectively locking the display at remote computers to prevent annotation of the display by users of the remote computers
EP0622930A3 (en) 1993-03-19 1996-06-05 At & T Global Inf Solution Application sharing for computer collaboration system.
US7185054B1 (en) * 1993-10-01 2007-02-27 Collaboration Properties, Inc. Participant display and selection in video conference calls
US5537526A (en) 1993-11-12 1996-07-16 Taligent, Inc. Method and apparatus for processing a display document utilizing a system level document framework
US5553083B1 (en) 1995-01-19 2000-05-16 Starburst Comm Corp Method for quickly and reliably transmitting frames of data over communications links
JPH08305663A (en) 1995-04-28 1996-11-22 Hitachi Ltd Teamwork support system
JPH08320755A (en) 1995-05-25 1996-12-03 Canon Inc Information processor
US6911987B1 (en) 1995-07-05 2005-06-28 Microsoft Corporation Method and system for transmitting data for a shared application
US5867156A (en) * 1995-11-08 1999-02-02 Intel Corporation Automatic viewport display synchronization during application sharing
US6343313B1 (en) 1996-03-26 2002-01-29 Pixion, Inc. Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability
US5818425A (en) 1996-04-03 1998-10-06 Xerox Corporation Mapping drawings generated on small mobile pen based electronic devices onto large displays
US5781732A (en) 1996-06-20 1998-07-14 Object Technology Licensing Corp. Framework for constructing shared documents that can be collaboratively accessed by multiple users
US6084584A (en) 1996-10-01 2000-07-04 Diamond Multimedia Systems, Inc. Computer system supporting portable interactive graphics display tablet and communications systems
JPH10198517A (en) 1997-01-10 1998-07-31 Tokyo Noukou Univ Method for controlling display content of display device
US5940082A (en) 1997-02-14 1999-08-17 Brinegar; David System and method for distributed collaborative drawing
US6035342A (en) 1997-03-19 2000-03-07 Microsoft Corporation Method and computer program product for implementing object relationships
US6167433A (en) 1997-08-01 2000-12-26 Muse Technologies, Inc. Shared multi-user interface for multi-dimensional synthetic environments
US6078921A (en) 1998-03-03 2000-06-20 Trellix Corporation Method and apparatus for providing a self-service file
CA2267733A1 (en) 1998-04-06 1999-10-06 Smart Technologies, Inc. Method for editing objects representing writing on an electronic writeboard
US6710790B1 (en) 1998-08-13 2004-03-23 Symantec Corporation Methods and apparatus for tracking the active window of a host computer in a remote computer display window
US6351777B1 (en) * 1999-04-23 2002-02-26 The United States Of America As Represented By The Secretary Of The Navy Computer software for converting a general purpose computer network into an interactive communications system
US7043529B1 (en) 1999-04-23 2006-05-09 The United States Of America As Represented By The Secretary Of The Navy Collaborative development network for widely dispersed users and methods therefor
US6564246B1 (en) 1999-02-02 2003-05-13 International Business Machines Corporation Shared and independent views of shared workspace for real-time collaboration
US6342906B1 (en) 1999-02-02 2002-01-29 International Business Machines Corporation Annotation layer for synchronous collaboration
US6868425B1 (en) 1999-03-05 2005-03-15 Microsoft Corporation Versions and workspaces in an object repository
US7028264B2 (en) 1999-10-29 2006-04-11 Surfcast, Inc. System and method for simultaneous display of multiple information sources
WO2001061633A2 (en) 2000-02-15 2001-08-23 Siemens Technology-To-Business Center, Llc Electronic whiteboard system using a tactile foam sensor
EP1277104A1 (en) 2000-03-30 2003-01-22 Ideogramic APS Method for gesture based modeling
US7171448B1 (en) 2000-04-17 2007-01-30 Accenture Ans Conducting activities in a collaborative work tool architecture
US6930673B2 (en) 2000-11-13 2005-08-16 Gtco Calcomp Collaborative input system
US7003728B2 (en) 2000-12-21 2006-02-21 David Berque System for knowledge transfer in a group setting
WO2002073507A2 (en) 2001-03-14 2002-09-19 Given Imaging Ltd. Method and system for detecting colorimetric abnormalities
US6778989B2 (en) 2001-07-03 2004-08-17 International Business Machines Corporation System and method for constructing and viewing an electronic document
JP4250884B2 (en) 2001-09-05 2009-04-08 パナソニック株式会社 Electronic blackboard system
US7356563B1 (en) 2002-06-06 2008-04-08 Microsoft Corporation Methods of annotating a collaborative application display
US8125459B2 (en) 2007-10-01 2012-02-28 Igt Multi-user input systems and processing techniques for serving multiple users
EP1547009A1 (en) 2002-09-20 2005-06-29 Board Of Regents The University Of Texas System Computer program products, systems and methods for information discovery and relational analyses
TWI220973B (en) 2002-11-22 2004-09-11 Macroblock Inc Device and set for driving display device
US7624143B2 (en) 2002-12-12 2009-11-24 Xerox Corporation Methods, apparatus, and program products for utilizing contextual property metadata in networked computing environments
US7129934B2 (en) 2003-01-31 2006-10-31 Hewlett-Packard Development Company, L.P. Collaborative markup projection system
WO2004070396A2 (en) 2003-02-10 2004-08-19 N-Trig Ltd. Touch detection for a digitizer
US7369102B2 (en) 2003-03-04 2008-05-06 Microsoft Corporation System and method for navigating a graphical user interface on a smaller display
US7269794B2 (en) 2003-09-11 2007-09-11 International Business Machines Corporation Method and apparatus for viewpoint collaboration
US7275212B2 (en) * 2003-10-23 2007-09-25 Microsoft Corporation Synchronized graphics and region data for graphics remoting systems
US7765143B1 (en) * 2003-11-04 2010-07-27 Trading Technologies International, Inc. System and method for event driven virtual workspace
US7460134B2 (en) 2004-03-02 2008-12-02 Microsoft Corporation System and method for moving computer displayable content into a preferred user interactive focus area
US7262783B2 (en) 2004-03-03 2007-08-28 Virtual Iris Studios, Inc. System for delivering and enabling interactivity with images
WO2006118558A1 (en) 2004-04-14 2006-11-09 Sagi Richberg Method and system for connecting users
US20050273700A1 (en) 2004-06-02 2005-12-08 Amx Corporation Computer system with user interface having annotation capability
US7450109B2 (en) 2004-07-13 2008-11-11 International Business Machines Corporation Electronic whiteboard
JP4795343B2 (en) 2004-07-15 2011-10-19 エヌ−トリグ リミテッド Automatic switching of dual mode digitizer
US8456506B2 (en) 2004-08-03 2013-06-04 Applied Minds, Llc Systems and methods for enhancing teleconferencing collaboration
US7728823B2 (en) 2004-09-24 2010-06-01 Apple Inc. System and method for processing raw data of track pad device
US7664870B2 (en) 2005-02-25 2010-02-16 Microsoft Corporation Method and system for providing users a lower fidelity alternative until a higher fidelity experience is available
WO2006094291A1 (en) 2005-03-03 2006-09-08 Raytheon Company Incident command system
US20060224427A1 (en) 2005-03-30 2006-10-05 International Business Machines Corporation Method, system, and program product for individual and group work space allocation and utilization
JP4664108B2 (en) 2005-03-31 2011-04-06 富士通株式会社 Display device, display method, display program, and display system
US7908325B1 (en) 2005-06-20 2011-03-15 Oracle America, Inc. System and method for event-based collaboration
US20070198744A1 (en) 2005-11-30 2007-08-23 Ava Mobile, Inc. System, method, and computer program product for concurrent collaboration of media
US20090278806A1 (en) 2008-05-06 2009-11-12 Matias Gonzalo Duarte Extended touch-sensitive control area for electronic device
US8209308B2 (en) 2006-05-01 2012-06-26 Rueben Steven L Method for presentation of revisions of an electronic document
US9063647B2 (en) 2006-05-12 2015-06-23 Microsoft Technology Licensing, Llc Multi-touch uses, gestures, and implementation
US7870193B2 (en) * 2006-08-28 2011-01-11 International Business Machines Corporation Collaborative, event driven system management
US9201854B1 (en) 2006-10-25 2015-12-01 Hewlett-Packard Development Company, L.P. Methods and systems for creating, interacting with, and utilizing a superactive document
US20080163053A1 (en) 2006-12-28 2008-07-03 Samsung Electronics Co., Ltd. Method to provide menu, using menu set and multimedia device using the same
USD617338S1 (en) 2007-01-05 2010-06-08 Verizon Patent And Licensing Inc. Computer generated image for a display panel or screen
US7877707B2 (en) 2007-01-06 2011-01-25 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US20080177771A1 (en) 2007-01-19 2008-07-24 International Business Machines Corporation Method and system for multi-location collaboration
US8351989B2 (en) 2007-02-23 2013-01-08 Lg Electronics Inc. Method of displaying menu in a mobile communication terminal
US8214367B2 (en) 2007-02-27 2012-07-03 The Trustees Of Columbia University In The City Of New York Systems, methods, means, and media for recording, searching, and outputting display information
WO2009018314A2 (en) 2007-07-30 2009-02-05 Perceptive Pixel, Inc. Graphical user interface for large-scale, multi-user, multi-touch systems
US20090089682A1 (en) 2007-09-27 2009-04-02 Rockwell Automation Technologies, Inc. Collaborative environment for sharing visualizations of industrial automation data
US20140032770A1 (en) 2007-09-28 2014-01-30 Adobe Systems Incorporated Declarative specification of collaboration client functionality
US9335869B2 (en) 2007-10-01 2016-05-10 Igt Method and apparatus for detecting lift off on a touchscreen
JP2011503709A (en) 2007-11-07 2011-01-27 エヌ−トリグ リミテッド Gesture detection for digitizer
KR100996682B1 (en) * 2007-11-30 2010-11-25 주식회사 모션클라우드 Rich Content Creation System and Method Thereof, and Media That Can Record Computer Program for Method Thereof
AR064377A1 (en) 2007-12-17 2009-04-01 Rovere Victor Manuel Suarez DEVICE FOR SENSING MULTIPLE CONTACT AREAS AGAINST OBJECTS SIMULTANEOUSLY
US20090234721A1 (en) 2007-12-21 2009-09-17 Bigelow David H Persistent collaborative on-line meeting space
US20090160786A1 (en) 2007-12-21 2009-06-25 Dean Finnegan Touch control electronic display
US20090174679A1 (en) 2008-01-04 2009-07-09 Wayne Carl Westerman Selective Rejection of Touch Contacts in an Edge Region of a Touch Surface
US20110063191A1 (en) * 2008-01-07 2011-03-17 Smart Technologies Ulc Method of managing applications in a multi-monitor computer system and multi-monitor computer system employing the method
JP2009169735A (en) 2008-01-17 2009-07-30 Sharp Corp Information processing display device
US9965638B2 (en) 2008-01-28 2018-05-08 Adobe Systems Incorporated Rights application within document-based conferencing
JP2009193323A (en) 2008-02-14 2009-08-27 Sharp Corp Display apparatus
WO2009105544A2 (en) 2008-02-19 2009-08-27 The Board Of Trustees Of The University Of Illinois Large format high resolution interactive display
US8473851B2 (en) * 2008-02-27 2013-06-25 Cisco Technology, Inc. Multi-party virtual desktop
US8531447B2 (en) 2008-04-03 2013-09-10 Cisco Technology, Inc. Reactive virtual environment
US8176434B2 (en) 2008-05-12 2012-05-08 Microsoft Corporation Virtual desktop view scrolling
KR20090120891A (en) 2008-05-21 2009-11-25 (주)앞선교육 Remote device and system for transmitting picture of real object and written data, and pairing method thereof, and presentation method using the same
US20090309846A1 (en) 2008-06-11 2009-12-17 Marc Trachtenberg Surface computing collaboration system, method and apparatus
US20090309853A1 (en) * 2008-06-13 2009-12-17 Polyvision Corporation Electronic whiteboard system and assembly with optical detection elements
US8275197B2 (en) 2008-06-14 2012-09-25 Microsoft Corporation Techniques to manage a whiteboard for multimedia conference events
US8271887B2 (en) * 2008-07-17 2012-09-18 The Boeing Company Systems and methods for whiteboard collaboration and annotation
NO333026B1 (en) 2008-09-17 2013-02-18 Cisco Systems Int Sarl Control system for a local telepresence video conferencing system and method for establishing a video conferencing call.
US8402391B1 (en) 2008-09-25 2013-03-19 Apple, Inc. Collaboration system
JP2010079834A (en) 2008-09-29 2010-04-08 Hitachi Software Eng Co Ltd Device for determination of mounting position of coordinate detection device and electronic board system
GB2453672B (en) * 2008-10-21 2009-09-16 Promethean Ltd Registration for interactive whiteboard
JP2010134897A (en) 2008-10-28 2010-06-17 Nippon Telegr & Teleph Corp <Ntt> Drawing device, drawing method, program and recording medium
WO2010057106A2 (en) 2008-11-14 2010-05-20 Virtual Nerd, Llc. Whiteboard presentation of interactive and expandable modular content
US20100306696A1 (en) 2008-11-26 2010-12-02 Lila Aps (Ahead.) Dynamic network browser
US20100131868A1 (en) * 2008-11-26 2010-05-27 Cisco Technology, Inc. Limitedly sharing application windows in application sharing sessions
USD600703S1 (en) 2008-12-02 2009-09-22 Microsoft Corporation Icon for a display screen
US20100205190A1 (en) 2009-02-09 2010-08-12 Microsoft Corporation Surface-based collaborative search
KR101055924B1 (en) * 2009-05-26 2011-08-09 주식회사 팬택 User interface device and method in touch device
US9298834B2 (en) 2009-05-26 2016-03-29 Adobe Systems Incorporated User presence data for web-based document collaboration
US8681106B2 (en) 2009-06-07 2014-03-25 Apple Inc. Devices, methods, and graphical user interfaces for accessibility using a touch-sensitive surface
US20100318921A1 (en) 2009-06-16 2010-12-16 Marc Trachtenberg Digital easel collaboration system and method
US9135599B2 (en) 2009-06-18 2015-09-15 Microsoft Technology Licensing, Llc Smart notebook
WO2011000284A1 (en) 2009-06-30 2011-01-06 Techbridge, Inc. A multimedia collaboration system
CN101630240B (en) 2009-08-18 2011-11-09 深圳雅图数字视频技术有限公司 Electronic white board equipment and drawing method thereof
US8407290B2 (en) 2009-08-31 2013-03-26 International Business Machines Corporation Dynamic data sharing using a collaboration-enabled web browser
WO2011029067A2 (en) 2009-09-03 2011-03-10 Obscura Digital, Inc. Large scale multi-user, multi-touch system
US20110050640A1 (en) * 2009-09-03 2011-03-03 Niklas Lundback Calibration for a Large Scale Multi-User, Multi-Touch System
KR101390957B1 (en) 2009-09-04 2014-05-02 나이키 인터내셔널 엘티디. Monitoring and tracking athletic activity
WO2011048901A1 (en) 2009-10-22 2011-04-28 コニカミノルタホールディングス株式会社 Conference support system
US20110183654A1 (en) 2010-01-25 2011-07-28 Brian Lanier Concurrent Use of Multiple User Interface Devices
US8490002B2 (en) * 2010-02-11 2013-07-16 Apple Inc. Projected display shared workspaces
US8949346B2 (en) 2010-02-25 2015-02-03 Cisco Technology, Inc. System and method for providing a two-tiered virtual communications architecture in a network environment
US20110214063A1 (en) 2010-03-01 2011-09-01 Microsoft Corporation Efficient navigation of and interaction with a remoted desktop that is larger than the local screen
US8818027B2 (en) 2010-04-01 2014-08-26 Qualcomm Incorporated Computing device interface
US20110246875A1 (en) 2010-04-02 2011-10-06 Symantec Corporation Digital whiteboard implementation
KR20110121888A (en) 2010-05-03 2011-11-09 삼성전자주식회사 Apparatus and method for determining the pop-up menu in portable terminal
US20120019453A1 (en) 2010-07-26 2012-01-26 Wayne Carl Westerman Motion continuation of touch input
JP5625615B2 (en) 2010-08-20 2014-11-19 株式会社リコー Electronic information board device
JP5644266B2 (en) 2010-08-30 2014-12-24 株式会社リコー Electronic blackboard system, electronic blackboard device, control method and program for electronic blackboard system
KR101685363B1 (en) 2010-09-27 2016-12-12 엘지전자 주식회사 Mobile terminal and operation method thereof
US8655945B2 (en) * 2010-11-16 2014-02-18 International Business Machines Corporation Centralized rendering of collaborative content
US8502816B2 (en) * 2010-12-02 2013-08-06 Microsoft Corporation Tabletop display providing multiple views to users
US8963961B2 (en) * 2010-12-29 2015-02-24 Sap Se Fractal whiteboarding
US20120176328A1 (en) 2011-01-11 2012-07-12 Egan Teamboard Inc. White board operable by variable pressure inputs
US20120179994A1 (en) 2011-01-12 2012-07-12 Smart Technologies Ulc Method for manipulating a toolbar on an interactive input system and interactive input system executing the method
CN103534674A (en) 2011-02-08 2014-01-22 海沃氏公司 Multimodal touchscreen interaction apparatuses, methods and systems
CN103298582B (en) 2011-02-22 2015-09-02 三菱重工业株式会社 The manufacture method of impeller
US9086798B2 (en) 2011-03-07 2015-07-21 Ricoh Company, Ltd. Associating information on a whiteboard with a user
WO2012135231A2 (en) 2011-04-01 2012-10-04 Social Communications Company Creating virtual areas for realtime communications
US20120260176A1 (en) 2011-04-08 2012-10-11 Google Inc. Gesture-activated input using audio recognition
WO2012149176A2 (en) * 2011-04-26 2012-11-01 Infocus Corporation Interactive and collaborative computing device
US9471192B2 (en) 2011-05-23 2016-10-18 Haworth, Inc. Region dynamics for digital whiteboard
WO2012162411A1 (en) 2011-05-23 2012-11-29 Haworth, Inc. Digital whiteboard collaboration apparatuses, methods and systems
US9465434B2 (en) 2011-05-23 2016-10-11 Haworth, Inc. Toolbar dynamics for digital whiteboard
US20140055400A1 (en) 2011-05-23 2014-02-27 Haworth, Inc. Digital workspace ergonomics apparatuses, methods and systems
US9158445B2 (en) * 2011-05-27 2015-10-13 Microsoft Technology Licensing, Llc Managing an immersive interface in a multi-application immersive environment
US20120320073A1 (en) * 2011-06-14 2012-12-20 Obscura Digital, Inc. Multiple Spatial Partitioning Algorithm Rendering Engine
US9075561B2 (en) * 2011-07-29 2015-07-07 Apple Inc. Systems, methods, and computer-readable media for managing collaboration on a virtual work of art
WO2013104053A1 (en) 2012-01-11 2013-07-18 Smart Technologies Ulc Method of displaying input during a collaboration session and interactive board employing same
US20130218998A1 (en) 2012-02-21 2013-08-22 Anacore, Inc. System, Method, and Computer-Readable Medium for Interactive Collaboration
EP2658232A1 (en) 2012-04-23 2013-10-30 Onmobile Global Limited Method and system for an optimized multimedia communications system
WO2013164022A1 (en) 2012-05-02 2013-11-07 Office For Media And Arts International Gmbh System and method for collaborative computing
US9479549B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard with federated display
US9479548B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard access to global collaboration data
US9319634B2 (en) 2012-07-18 2016-04-19 Polycom, Inc. Facilitating multi-party conferences, including allocating resources needed for conference while establishing connections with participants
US20140040767A1 (en) 2012-08-03 2014-02-06 Oracle International Corporation Shared digital whiteboard
WO2014023432A1 (en) 2012-08-09 2014-02-13 Livestudies Srl Apparatus and method for a collaborative environment using distributed touch sensitive surfaces
US10304037B2 (en) 2013-02-04 2019-05-28 Haworth, Inc. Collaboration system including a spatial event map
US9846526B2 (en) 2013-06-28 2017-12-19 Verizon and Redbox Digital Entertainment Services, LLC Multi-user collaboration tracking methods and systems
JP6296919B2 (en) 2014-06-30 2018-03-20 株式会社東芝 Information processing apparatus and grouping execution / cancellation method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6518957B1 (en) * 1999-08-13 2003-02-11 Nokia Mobile Phones Limited Communications device with touch sensitive screen
US20050237380A1 (en) * 2004-04-23 2005-10-27 Toshiaki Kakii Coding method for motion-image data, decoding method, terminal equipment executing these, and two-way interactive system
US20060161871A1 (en) * 2004-07-30 2006-07-20 Apple Computer, Inc. Proximity detector in handheld device
US20080158178A1 (en) * 2007-01-03 2008-07-03 Apple Inc. Front-end signal compensation
US20110216064A1 (en) * 2008-09-08 2011-09-08 Qualcomm Incorporated Sending a parameter based on screen size or screen resolution of a multi-panel electronic device to a server
US20100318470A1 (en) * 2009-05-13 2010-12-16 Christoph Meinel Means for Processing Information
US20100315481A1 (en) * 2009-06-10 2010-12-16 Alcatel-Lucent Usa Inc. Portable video conferencing system with universal focal point
US20110109526A1 (en) * 2009-11-09 2011-05-12 Qualcomm Incorporated Multi-screen image display
US20110148926A1 (en) * 2009-12-17 2011-06-23 Lg Electronics Inc. Image display apparatus and method for operating the image display apparatus
US20120026200A1 (en) * 2010-07-05 2012-02-02 Lenovo (Singapore) Pte, Ltd. Information input device, on-screen arrangement method thereof, and computer-executable program
US8898590B2 (en) * 2010-07-05 2014-11-25 Lenovo (Singapore) Pte. Ltd. Information input device, on-screen arrangement method thereof, and computer-executable program
US20120011465A1 (en) * 2010-07-06 2012-01-12 Marcelo Amaral Rezende Digital whiteboard system
US20120038572A1 (en) * 2010-08-14 2012-02-16 Samsung Electronics Co., Ltd. System and method for preventing touch malfunction in a mobile device
US20150084055A1 (en) * 2013-09-26 2015-03-26 Japan Display Inc. Display device

Cited By (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465434B2 (en) 2011-05-23 2016-10-11 Haworth, Inc. Toolbar dynamics for digital whiteboard
US11740915B2 (en) 2011-05-23 2023-08-29 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US11886896B2 (en) 2011-05-23 2024-01-30 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US9471192B2 (en) 2011-05-23 2016-10-18 Haworth, Inc. Region dynamics for digital whiteboard
US9479549B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard with federated display
US9479548B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard access to global collaboration data
US10304037B2 (en) 2013-02-04 2019-05-28 Haworth, Inc. Collaboration system including a spatial event map
US11481730B2 (en) 2013-02-04 2022-10-25 Haworth, Inc. Collaboration system including a spatial event map
US11861561B2 (en) 2013-02-04 2024-01-02 Haworth, Inc. Collaboration system including a spatial event map
US11887056B2 (en) 2013-02-04 2024-01-30 Haworth, Inc. Collaboration system including a spatial event map
US10949806B2 (en) 2013-02-04 2021-03-16 Haworth, Inc. Collaboration system including a spatial event map
US11061547B1 (en) * 2013-03-15 2021-07-13 Study Social, Inc. Collaborative, social online education and whiteboard techniques
US20160103552A1 (en) * 2013-05-03 2016-04-14 Samsung Electronics Co., Ltd Screen operation method for electronic device based on electronic device and control action
US10402002B2 (en) * 2013-05-03 2019-09-03 Samsung Electronics Co., Ltd. Screen operation method for electronic device based on electronic device and control action
US9633641B2 (en) * 2013-06-04 2017-04-25 Berggram Development Oy Grid based user interference for chord presentation on a touch screen device
US20160140944A1 (en) * 2013-06-04 2016-05-19 Berggram Development Oy Grid based user interference for chord presentation on a touch screen device
US20140380198A1 (en) * 2013-06-24 2014-12-25 Xiaomi Inc. Method, device, and terminal apparatus for processing session based on gesture
US20190058864A1 (en) * 2013-10-31 2019-02-21 Ati Technologies Ulc Single display pipe multi-view frame composer method and apparatus
US10142607B2 (en) * 2013-10-31 2018-11-27 Ati Technologies Ulc Single display pipe multi-view frame composer method and apparatus
US20150116468A1 (en) * 2013-10-31 2015-04-30 Ati Technologies Ulc Single display pipe multi-view frame composer method and apparatus
US10904507B2 (en) * 2013-10-31 2021-01-26 Ati Technologies Ulc Single display pipe multi-view frame composer method and apparatus
USD738909S1 (en) * 2014-01-09 2015-09-15 Microsoft Corporation Display screen with animated graphical user interface
US10346000B2 (en) * 2014-02-18 2019-07-09 Sony Corporation Information processing apparatus and method, information processing system for improved security level in browsing of content
US9794078B2 (en) * 2014-03-05 2017-10-17 Ricoh Company, Ltd. Fairly adding documents to a collaborative session
US20150256638A1 (en) * 2014-03-05 2015-09-10 Ricoh Co., Ltd. Fairly Adding Documents to a Collaborative Session
US9501163B2 (en) * 2014-05-06 2016-11-22 Symbol Technologies, Llc Apparatus and method for activating a trigger mechanism
US20150324041A1 (en) * 2014-05-06 2015-11-12 Symbol Technologies, Inc. Apparatus and method for activating a trigger mechanism
US20160026358A1 (en) * 2014-07-28 2016-01-28 Lenovo (Singapore) Pte, Ltd. Gesture-based window management
US9804702B2 (en) * 2014-12-15 2017-10-31 Salt International Corp. Refreshing method of background signal and device for applying the method
US9811198B2 (en) * 2014-12-15 2017-11-07 Salt International Corp. Refreshing method of background signal and device for applying the method
US20160170550A1 (en) * 2014-12-15 2016-06-16 Salt International Corp. Refreshing method of background signal and device for applying the method
US20160170551A1 (en) * 2014-12-15 2016-06-16 Salt International Corp. Refreshing method of background signal and device for applying the method
US10191890B2 (en) * 2014-12-17 2019-01-29 Microsoft Technology Licensing, Llc Persistent viewports
EP3259679A4 (en) * 2015-02-20 2018-08-01 Hewlett-Packard Development Company, L.P. An automatically invoked unified visualization interface
CN107209773A (en) * 2015-02-20 2017-09-26 惠普发展公司,有限责任合伙企业 Automatically unified visualization interface is called
US11138216B2 (en) 2015-02-20 2021-10-05 Hewlett-Packard Development Company, L.P. Automatically invoked unified visualization interface
WO2016133534A1 (en) 2015-02-20 2016-08-25 Hewlett-Packard Development Company, L.P. An automatically invoked unified visualization interface
US20160320954A1 (en) * 2015-04-30 2016-11-03 Elwha Llc One-touch replay for whiteboard
US11797256B2 (en) 2015-05-06 2023-10-24 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11816387B2 (en) 2015-05-06 2023-11-14 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11775246B2 (en) 2015-05-06 2023-10-03 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US10802783B2 (en) 2015-05-06 2020-10-13 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11262969B2 (en) 2015-05-06 2022-03-01 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US20180181231A1 (en) * 2015-06-12 2018-06-28 Sharp Kabushiki Kaisha Eraser device and command input system
US10466850B2 (en) * 2015-06-12 2019-11-05 Sharp Kabushiki Kaisha Eraser device and command input system
US20170068414A1 (en) * 2015-09-09 2017-03-09 Microsoft Technology Licensing, Llc Controlling a device
CN108027699A (en) * 2015-09-09 2018-05-11 微软技术许可有限责任公司 Control multi-user touch screen equipment
US11120342B2 (en) 2015-11-10 2021-09-14 Ricoh Company, Ltd. Electronic meeting intelligence
US10795536B2 (en) * 2016-01-15 2020-10-06 Pearson Education, Inc. Interactive presentation controls
US20170205987A1 (en) * 2016-01-15 2017-07-20 Pearson Education, Inc. Interactive presentation controls
CN108463784A (en) * 2016-01-15 2018-08-28 皮尔森教育有限公司 Interactive demonstration controls
EP3203365A1 (en) * 2016-02-05 2017-08-09 Prysm, Inc. Cross platform annotation syncing
US10705786B2 (en) 2016-02-12 2020-07-07 Haworth, Inc. Collaborative electronic whiteboard publication process
US10255023B2 (en) 2016-02-12 2019-04-09 Haworth, Inc. Collaborative electronic whiteboard publication process
US10942641B2 (en) * 2016-09-30 2021-03-09 Sap Se Synchronized calendar and timeline adaptive user interface
US20200019292A1 (en) * 2016-09-30 2020-01-16 Sap Se Synchronized calendar and timeline adaptive user interface
US11307735B2 (en) 2016-10-11 2022-04-19 Ricoh Company, Ltd. Creating agendas for electronic meetings using artificial intelligence
US10860985B2 (en) 2016-10-11 2020-12-08 Ricoh Company, Ltd. Post-meeting processing using artificial intelligence
USD834059S1 (en) * 2017-03-02 2018-11-20 Navitaire Llc Display screen with animated graphical user interface
US11163866B2 (en) * 2017-03-31 2021-11-02 Ricoh Company, Ltd. Shared terminal, display control method, and non-transitory computer-readable medium
US10545658B2 (en) 2017-04-25 2020-01-28 Haworth, Inc. Object processing and selection gestures for forming relationships among objects in a collaboration system
WO2019067125A1 (en) * 2017-09-29 2019-04-04 Dropbox, Inc. Managing content item collections
AU2018342118B2 (en) * 2017-09-29 2020-11-12 Dropbox, Inc. Managing content item collections
US11630909B2 (en) 2017-09-29 2023-04-18 Dropbox, Inc. Managing content item collections
US11593549B2 (en) 2017-09-29 2023-02-28 Dropbox, Inc. Managing content item collections
US10922426B2 (en) 2017-09-29 2021-02-16 Dropbox, Inc. Managing content item collections
US11222162B2 (en) 2017-09-29 2022-01-11 Dropbox, Inc. Managing content item collections
US10592595B2 (en) 2017-09-29 2020-03-17 Dropbox, Inc. Maintaining multiple versions of a collection of content items
US11030585B2 (en) 2017-10-09 2021-06-08 Ricoh Company, Ltd. Person detection, person identification and meeting start for interactive whiteboard appliances
US10956875B2 (en) 2017-10-09 2021-03-23 Ricoh Company, Ltd. Attendance tracking, presentation files, meeting services and agenda extraction for interactive whiteboard appliances
US11062271B2 (en) 2017-10-09 2021-07-13 Ricoh Company, Ltd. Interactive whiteboard appliances with learning capabilities
US11038973B2 (en) 2017-10-19 2021-06-15 Dropbox, Inc. Contact event feeds and activity updates
US11126325B2 (en) 2017-10-23 2021-09-21 Haworth, Inc. Virtual workspace including shared viewport markers in a collaboration system
US11934637B2 (en) 2017-10-23 2024-03-19 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces
US10761719B2 (en) * 2017-11-09 2020-09-01 Microsoft Technology Licensing, Llc User interface code generation based on free-hand input
US10063660B1 (en) 2018-02-09 2018-08-28 Picmonkey, Llc Collaborative editing of media in a mixed computing environment
US10579240B2 (en) * 2018-02-09 2020-03-03 Picmonkey, Llc Live-rendered and forkable graphic edit trails
US10757148B2 (en) * 2018-03-02 2020-08-25 Ricoh Company, Ltd. Conducting electronic meetings over computer networks using interactive whiteboard appliances and mobile devices
US20190273767A1 (en) * 2018-03-02 2019-09-05 Ricoh Company, Ltd. Conducting electronic meetings over computer networks using interactive whiteboard appliances and mobile devices
US11061639B2 (en) * 2018-03-14 2021-07-13 Ricoh Company, Ltd. Electronic whiteboard system, electronic whiteboard, and method of displaying content data
GB2587095A (en) * 2018-03-27 2021-03-17 Vizetto Inc Systems and methods for multi-screen display and interaction
GB2587095B (en) * 2018-03-27 2023-02-01 Vizetto Inc Systems and methods for multi-screen display and interaction
WO2019183726A1 (en) * 2018-03-27 2019-10-03 Vizetto Inc. Systems and methods for multi-screen display and interaction
CN112384972A (en) * 2018-03-27 2021-02-19 维泽托有限责任公司 System and method for multi-screen display and interaction
US20220188054A1 (en) * 2018-09-30 2022-06-16 Shanghai Dalong Technology Co., Ltd. Virtual input device-based method and system for remotely controlling pc
US11907741B2 (en) * 2018-09-30 2024-02-20 Shanghai Dalong Technology Co., Ltd. Virtual input device-based method and system for remotely controlling PC
US11205009B2 (en) * 2018-11-29 2021-12-21 Ricoh Company, Ltd. Information processing apparatus, information processing system, and control method
US11573694B2 (en) 2019-02-25 2023-02-07 Haworth, Inc. Gesture based workflows in a collaboration system
US11720741B2 (en) 2019-03-15 2023-08-08 Ricoh Company, Ltd. Artificial intelligence assisted review of electronic documents
US11573993B2 (en) 2019-03-15 2023-02-07 Ricoh Company, Ltd. Generating a meeting review document that includes links to the one or more documents reviewed
US11392754B2 (en) 2019-03-15 2022-07-19 Ricoh Company, Ltd. Artificial intelligence assisted review of physical documents
US11270060B2 (en) 2019-03-15 2022-03-08 Ricoh Company, Ltd. Generating suggested document edits from recorded media using artificial intelligence
US11263384B2 (en) 2019-03-15 2022-03-01 Ricoh Company, Ltd. Generating document edit requests for electronic documents managed by a third-party document management service using artificial intelligence
US11080466B2 (en) 2019-03-15 2021-08-03 Ricoh Company, Ltd. Updating existing content suggestion to include suggestions from recorded media using artificial intelligence
CN110069256A (en) * 2019-04-23 2019-07-30 北京三快在线科技有限公司 Draw method, apparatus, terminal and the storage medium of component
US11429263B1 (en) * 2019-08-20 2022-08-30 Lenovo (Singapore) Pte. Ltd. Window placement based on user location
US11750672B2 (en) 2020-05-07 2023-09-05 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US11212127B2 (en) 2020-05-07 2021-12-28 Haworth, Inc. Digital workspace sharing over one or more display clients and authorization protocols for collaboration systems
US11956289B2 (en) 2020-05-07 2024-04-09 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US20240078142A1 (en) * 2022-09-02 2024-03-07 Dell Products, L.P. Managing user engagement during collaboration sessions in heterogenous computing platforms

Also Published As

Publication number Publication date
US11740915B2 (en) 2023-08-29
US20170199750A1 (en) 2017-07-13
US20230350703A1 (en) 2023-11-02
US20230376326A1 (en) 2023-11-23
US11886896B2 (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US11886896B2 (en) Ergonomic digital collaborative workspace apparatuses, methods and systems
US9430140B2 (en) Digital whiteboard collaboration apparatuses, methods and systems
US20110196864A1 (en) Apparatuses, methods and systems for a visual query builder
US10567481B2 (en) Work environment for information sharing and collaboration
US10540431B2 (en) Emoji reactions for file content and associated activities
US10338793B2 (en) Messaging with drawn graphic input
EP3014408B1 (en) Showing interactions as they occur on a whiteboard
WO2011029055A1 (en) Apparatuses, methods and systems for a visual query builder
US20160286028A1 (en) Systems and methods for facilitating conversations
CN106462372A (en) Transferring content between graphical user interfaces
US20210152561A1 (en) Compliance boundaries for multi-tenant cloud environment
WO2018010316A1 (en) Desktop page management method and device
CN105379236A (en) User experience mode transitioning
RU2768526C2 (en) Real handwriting presence for real-time collaboration
US20150169168A1 (en) Methods and systems for managing displayed content items on touch-based computer devices
Kukimoto et al. HyperInfo: interactive large display for informal visual communication
US20160117140A1 (en) Electronic apparatus, processing method, and storage medium
WO2014039670A1 (en) Digital workspace ergonomics apparatuses, methods and systems
US20170329793A1 (en) Dynamic contact suggestions based on contextual relevance
JP6293903B2 (en) Electronic device and method for displaying information
US20150324100A1 (en) Preview Reticule To Manipulate Coloration In A User Interface
JP2014238667A (en) Information terminal, information processing program, information processing system, and information processing method
US20170257459A1 (en) Cross-application service-driven contextual messages
WO2022188145A1 (en) Method for interaction between display device and terminal device, and storage medium and electronic device
US10162492B2 (en) Tap-to-open link selection areas

Legal Events

Date Code Title Description
AS Assignment

Owner name: HAWORTH, INC., MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REUSCHEL, JEFFREY JON;REEL/FRAME:031215/0994

Effective date: 20130916

AS Assignment

Owner name: PNC BANK, NATIONAL ASSOCIATION, AS ADMINISTRATIVE AGENT

Free format text: COLLATERAL ASSIGNMENT OF PATENTS;ASSIGNOR:HAWORTH, INC., HAWORTH, LTD. AND SUCCESSORS;REEL/FRAME:032606/0875

Effective date: 20140403

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: HAWORTH, INC., MICHIGAN

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK, NATIONAL ASSOCIATION;REEL/FRAME:052788/0497

Effective date: 20200528

Owner name: HAWORTH, LTD., MICHIGAN

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PNC BANK, NATIONAL ASSOCIATION;REEL/FRAME:052788/0497

Effective date: 20200528