
Method for capturing the 3D motion of an object, unmanned aerial vehicle and motion capture system


The present invention generally relates to a method for capturing the 3D motion of an object, an unmanned aerial vehicle and a motion capture system.
The present invention discloses the use of a camera and a depth sensor onboard an Unmanned Aerial Vehicle (UAV) the trajectory of which is controlled in such a way that potentially occluded passive markers always remain in the field of view of the camera.
In that way, the camera onboard the UAV provides complementary localization means for some of the markers placed on the object that may occasionally, because of occlusions, not be visible to a sufficient number of cameras of the optical motion capture system during a motion capture session, and as a result not localizable by the camera setup of the motion capture system.

Claims

1. A method for capturing the 3D motion of an object from the 3D locations of markers placed on this object, characterized in that the 3D location of at least one marker is determined from at least one image taken by a camera and from depth information obtained from a depth sensor.
2. Method according to claim 1, wherein the camera and the depth sensor are moveable along a same motion trajectory determined in such a way that at least one marker placed on the object always remains in the field of view of the camera and the depth sensor, which are set up in such a way that their optical centers are close to one another.
3. Method according to one of the preceding claims, wherein the 3D location of at least one marker placed on the object is determined from the 3D locations of at least one other marker (M) placed on an unmanned aerial vehicle.
4. Unmanned aerial vehicle characterized in that it comprises a camera, a depth sensor and a controller for controlling its trajectory in such a way that at least one marker placed on an object always remains in the field of view of the camera and the depth sensor.
5. Unmanned aerial vehicle according to claim 4, wherein the camera C and the depth sensor DS are parts of a same device.
6. A motion capture system comprising a camera setup, characterized in that the camera setup is completed by a camera and a depth sensor both onboard an unmanned aerial vehicle.
7. A motion capture system according to claim 6, wherein the unmanned aerial vehicle conforms to claim 4 or 5.
Description

1. Field of invention.

The invention generally relates to a method for capturing the 3D motion of an object. It relies on a motion capture system and an unmanned aerial vehicle.

2. Technical background.

The animation of computer-generated (CG) characters is an essential component of video game development, and is also increasingly used in the film-making industry. In order to obtain more realistic results, animations are often based on the performance of real actors, following a process known as performance-driven animation. In this process, the performances of the actors are analyzed and recorded by a motion capture system (often abbreviated as "mocap") and then transferred to the animation of the considered CG character.

Specifically, the invention is concerned with a motion capture system which consists of a camera setup and an acquisition system.

As illustrated on Fig. 1, the camera setup consists of a set of calibrated cameras positioned around the capture volume where actors, or more generally objects, perform, in such a way that any point within this capture volume is seen by a minimum of 2 cameras, and preferably more.

The actors wear dedicated capture suits, on which passive markers are placed at the locations of the main body articulations. The actors play the roles of the film characters or virtual creatures to be animated inside the capture volume, as defined by the scenario. During their performance, the optical motion capture system tracks and records the locations of the markers in each of the camera images.

According to methods known from prior art, and described for instance in the paper by G.B. Guerra-Filho entitled "Optical Motion Capture: Theory and Implementation", published in the Journal of Theoretical and Applied Informatics in 2005, the detected marker locations at each acquisition instant in the camera images are set in correspondence both temporally and spatially, to compute the sequence of 3D locations of each marker in the capture volume, for the duration of the capture.

An articulated body model of the actor is computed from this data at regular time intervals during the capture session, and retargeted on a pre-computed corresponding model of the CG character. The retargeting translates the tracking data acquired on the actor to the geometry of the body of the CG character. The tracked retargeted body model is eventually used to drive the animation of the virtual character.

An issue with such a motion capture process is that some of the markers occasionally get hidden to the cameras of the camera setup because of occlusions. An occlusion occurs whenever an element of the props, or part of an actor, stands in the line of sight between a camera and a marker. It sometimes happens that, for short periods of time, some of the markers are not visible to a sufficient number of cameras for their 3D localization in the capture volume to be performed accurately. In these situations, the missing parts of the affected markers tracks need to be reconstructed by human operators, a time-consuming and expensive process.

A well-known solution to this problem is to replace the passive markers by autonomous markers equipped with built-in inertial measurement units (IMUs). Knowing the initial localization of the marker at the start of the capture session, the IMUs can provide an estimate of the marker trajectory over time, without resorting to triangulation from the mocap setup cameras.

However, IMUs are known to suffer from severe drift problems, which likely degrade the accuracy of the estimated trajectories, and as a result the quality of the animation.

3. Summary of the invention.

The present invention solves at least one of the aforementioned drawbacks by using a camera and a depth sensor onboard an Unmanned Aerial Vehicle (UAV) the trajectory of which is controlled in such a way that potentially occluded passive markers always remain in the field of view of the camera.

In that way, the camera onboard the UAV provides complementary localization means for some of the markers placed on the object that may occasionally, because of occlusions, not be visible to a sufficient number of cameras of the optical motion capture system during a motion capture session, and as a result not localizable by the camera setup of the motion capture system.

Additional images of the markers which are not visible to a sufficient number of cameras of the camera setup of the mocap system are thus captured from the camera onboard the UAV. Accurate 3D localizations of the markers in the capture volume may then be performed without a time-consuming and expensive process (reconstruction by human operators) and without using autonomous markers equipped with built-in inertial measurement units (IMUs).

More precisely, the present invention relates to a method for capturing the 3D motion of an object from the 3D locations of markers placed on this object. The method is characterized in that the 3D location of at least one marker is determined from at least one image taken by a camera and from depth information obtained from a depth sensor.

According to an embodiment, the camera and the depth sensor are moveable along a same motion trajectory determined in such a way that at least one marker placed on the object always remains in the field of view of the camera and the depth sensor, which are set up in such a way that their optical centers are close to one another.

According to an embodiment, the 3D location of at least one marker placed on the object is determined from the 3D locations of at least one other marker placed on an unmanned aerial vehicle.

According to another of its aspects, the invention relates to an unmanned aerial vehicle characterized in that it comprises a camera, a depth sensor and a controller for controlling its trajectory in such a way that at least one marker placed on an object always remains in the field of view of the camera and the depth sensor.

According to an embodiment, the camera C and the depth sensor DS are parts of a same device.

According to yet another of its aspects, the invention relates to a motion capture system comprising a camera setup, characterized in that the camera setup is completed by a camera and a depth sensor both onboard an unmanned aerial vehicle.

According to an embodiment, the unmanned aerial vehicle conforms to claim 4 or 5.

The specific nature of the invention as well as other objects, advantages, features and uses of the invention will become evident from the following description of a preferred embodiment taken in conjunction with the accompanying drawings.

4. List of figures.

The embodiments will be described with reference to the following figures:

  • Fig. 1 shows schematically an example of a TV or film shooting studio,
  • Fig. 2 shows schematically a diagram illustrating a possible control scheme of the attitude and position of a UAV,
  • Fig. 3 shows a diagram of the steps of an embodiment of the method for capturing the 3D motion of an object,
  • Fig. 4 shows the steps of an embodiment of a step of the method for capturing the 3D motion of an object, and
  • Fig. 5 shows an example of an internal architecture of a device configured to control the navigation of an unmanned aerial vehicle.

5. Detailed description of a preferred embodiment of the invention.

The invention is not limited to the capture of the motion of an actor as illustrated by the TV or film shooting studio illustrated in Fig. 1, but may extend to any indoor or outdoor professional environment which is adapted to capture the 3D motion of an object from the 3D locations of markers placed on this object.

The gist of the invention is to complement the motion capture system by an extra camera and depth sensor mounted on an unmanned aerial vehicle. For the sake of comprehension, the principles of optical motion capture are briefly outlined below, followed by the description of an embodiment of the invention.

Fig. 1 shows an example of a TV or film shooting studio.

A TV or film shooting studio is a room equipped with a motion capture system which comprises a camera setup and an acquisition system.

The camera setup comprises cameras, here four referenced C1 to C4, and light sources, here three referenced L1 to L3.

The TV or film shooting studio is surrounded, at least partially, by walls which are painted in a uniform green or blue colour, so that mobile objects or actors or props filmed in the studio can be easily segmented out from the background of the studio using chroma keying. The studio needs to be large enough to hold the camera setup and make sure that the volume captured by this setup, called the capture volume, allows sufficient room for the props and the performance of the mobile objects and/or actors.

The motion capture cameras, here C1-C4, are positioned all around the capture volume usually in the center of the room, in such a way that any point within this volume is seen by a minimum of 2 cameras, and preferably more. The cameras must be synchronized, typically from an external genlock signal, and operate at sufficiently high frame rates (to avoid motion blur) and with sufficient resolution to accurately estimate the motion trajectories of the markers used for motion capture. Furthermore, the cameras are calibrated, both with respect to their intrinsic and extrinsic parameters, so that the location on a camera image of the projection of any 3D point of the motion capture volume in its viewing frustum, referenced in some 3D coordinate system SMC, can be accurately predicted.
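By way of illustration only, the following Python sketch shows how the projection of a 3D point of the capture volume onto a calibrated camera image can be predicted from intrinsic and extrinsic parameters; the function name, the matrix notation (K, R, t) and the numerical values are illustrative assumptions and are not taken from the invention.

```python
import numpy as np

def project_point(K, R, t, X_mc):
    """Project a 3D point X_mc, expressed in the capture-volume frame SMC, onto a
    camera image, given the camera intrinsic matrix K and the extrinsic parameters
    (rotation R, translation t) mapping SMC to the camera frame."""
    X_cam = R @ X_mc + t          # express the point in the camera frame
    x_hom = K @ X_cam             # apply the pinhole projection
    return x_hom[:2] / x_hom[2]   # normalize to pixel coordinates (u, v)

# Example: a camera aligned with SMC, focal length 1000 pixels, principal point (960, 540)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
print(project_point(K, np.eye(3), np.zeros(3), np.array([0.1, 0.2, 5.0])))  # -> [980. 580.]
```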

Lighting in the TV or film shooting studio relies on a set of fixed light sources, here L1 to L3, that provide an ideally diffuse and uniform lighting within the capture volume.

The time-stamped video signals captured by the camera setup are transferred and recorded from each of the cameras to a storage device, typically hard disk drives, thanks to the acquisition system (not represented in Fig. 1). The acquisition system also features a user interface and software for controlling the operation of the cameras and visualizing their outputs.

Tracking the 3D motion of an object (or actor) equipped with markers using such a motion capture system is well-known from prior art, and follows the principles described for instance by G.B. Guerra-Filho in "Optical Motion Capture: Theory and Implementation", published in the Journal of Theoretical and Applied Informatics in 2005.

The tracking method comprises detecting the locations of the markers in the images captured by the cameras. This is straightforward, as markers, owing to their high reflectivity, appear as bright spots in the images.

Next, spatial correspondences between the detected marker locations across camera images are established. A 3D point in the 3D coordinate system SMC having generated a detected location in a camera image lies on a viewing line going through this location in the camera image plane and the camera projection center. Spatial correspondences between detected locations across camera views, corresponding to the projections in the views of markers, can be determined by the fact that the above-defined viewing lines for each considered camera intersect at the location of the marker in the 3D coordinate system SMC. The locations and orientations of the image plane and projection center for each camera are known from the camera calibration data.

Next, the detected marker locations set in correspondence, and thus corresponding to the projections of markers, are tracked over time for each camera image. Temporal tracking typically relies on non-rigid point set registration techniques, wherein a global mapping is determined between the distributions of marker locations in two consecutive images of the same camera.

Next, the marker tracks are labeled. This can be performed manually, or alternatively the labels can be set automatically. Automatic labeling can benefit from a known initial layout of markers, for instance, in the case of body motion capture, the "T-stance" where the person stands with legs apart and both arms stretched away from the body.

Finally, the captured data is post-processed, especially in order to fill holes caused by marker occlusion. This can be automated up to some point using priors from a model of the captured object (e.g., an articulated body model) that constrains the locations of the missing markers when most of the marker locations are known, but needs to be performed manually if too many marker locations are missing.
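The spatial-correspondence step above amounts to intersecting viewing lines. As a purely illustrative sketch (not the patented method itself), the following Python code computes the least-squares intersection of the viewing lines of several calibrated cameras; all names and values are assumptions.

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection point of several viewing lines.
    centers: camera projection centers in SMC; directions: vectors from each center
    towards the detected marker location on the corresponding image plane."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the viewing line
        A += P
        b += P @ c
    return np.linalg.solve(A, b)         # estimated marker location in SMC

# Two cameras whose viewing lines pass through the same marker at (0.5, 1.0, 2.0)
X = np.array([0.5, 1.0, 2.0])
c1, c2 = np.zeros(3), np.array([3.0, 0.0, 0.0])
print(triangulate([c1, c2], [X - c1, X - c2]))   # -> [0.5 1.0 2.0]
```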

Optionally, specifically for body motion capture, an articulated human body is fitted to the 3D locations of markers at each frame, thus providing data for animating a virtual character (possibly after retargeting if the anthropometric proportions of the actor and the virtual character are different).

According to the invention, the camera setup of the motion capture system is completed by a camera and a depth sensor both onboard an unmanned aerial vehicle (UAV).

At least four non-planar markers M which are detectable by the motion capture system are placed on the UAV schematically represented in Fig. 1, where the UAV is represented by the four ovals and the markers M are represented by black filled disks.

The 3D locations of the markers M in the 3D coordinate system SMC are determined from images captured by the camera setup of the motion capture system as described above. A 3D coordinate system SUAV is defined for the UAV. A geometric transformation between the coordinate system SUAV of the UAV and the coordinate system SMC of the motion capture system can be computed because the locations of the markers in these two coordinate systems are known. Indeed, the 3D locations of the markers in SUAV are fixed as the markers are rigidly attached to the UAV and can be measured. The locations of the markers in SMC are provided by the motion capture system.

Let OM, AM, BM and CM be the centers of the markers M, OUAV, AUAV, BUAV and CUAV their respective coordinates in SUAV, and OMC, AMC, BMC and CMC their respective coordinates in SMC. The change of coordinate system from SUAV to SMC involves a translation followed by a rotation in 3D space. The translation is represented by a 3x1 vector T and the rotation by an orthonormal 3x3 matrix R. Consider the vectors u = OMAM, v = OMBM and w = OMCM. The application of the rotation R to the known values (uUAV, vUAV, wUAV) of these vectors in SUAV yields the known values (uMC, vMC, wMC) of these vectors in SMC. The coefficients of R are computed by inverting the 9x9 linear system resulting from these constraints. The coordinates MMC and MUAV, respectively in SMC and SUAV, of any of the marker centers M are linked by the relation MMC = R.(MUAV + T), which follows from the definition of the change of coordinate transformation. For a given marker, using the previously computed value of R and the known values of MMC and MUAV, this relation yields a linear 3x3 system of equations in T whose resolution provides the components of the translation.
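For illustration, the construction described above can be sketched in Python as follows: the 9x9 linear system gives the coefficients of R from the vectors u, v, w expressed in both frames, and the relation MMC = R.(MUAV + T) then gives T. The function names and sample values are assumptions, not part of the invention.

```python
import numpy as np

def estimate_R_T(P_uav, P_mc):
    """Estimate R and T such that M_MC = R @ (M_UAV + T), from four non-planar
    marker centers O, A, B, C given as 4x3 arrays in SUAV (P_uav) and SMC (P_mc)."""
    U_uav = P_uav[1:] - P_uav[0]     # rows: uUAV, vUAV, wUAV (translation cancels)
    U_mc = P_mc[1:] - P_mc[0]        # rows: uMC, vMC, wMC
    # Each relation u_MC = R @ u_UAV is linear in the 9 coefficients of R.
    A, b = np.zeros((9, 9)), np.zeros(9)
    for k in range(3):               # one vector constraint per pair of markers
        for row in range(3):
            A[3 * k + row, 3 * row:3 * row + 3] = U_uav[k]
            b[3 * k + row] = U_mc[k, row]
    R = np.linalg.solve(A, b).reshape(3, 3)
    # With R known, M_MC = R @ (M_UAV + T) is a 3x3 linear system in T.
    T = np.linalg.solve(R, P_mc[0]) - P_uav[0]
    return R, T

# Example: markers related by a 90-degree rotation about the vertical axis and a translation
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_true = np.array([0.1, 0.2, 0.3])
P_uav = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
P_mc = (R_true @ (P_uav + T_true).T).T
R, T = estimate_R_T(P_uav, P_mc)     # recovers R_true and T_true
```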

According to the invention, illustrated in Fig. 1, the UAV comprises a camera C, a depth sensor DS and a controller for controlling its trajectory in such a way that at least one marker placed on an object always remains in the fields of view of the camera C and the depth sensor DS, which are set up in such a way that their optical centers are close to one another. The camera C and the depth sensor DS are rigidly attached to the UAV and are therefore moveable along the same motion trajectory as the UAV.

According to an embodiment of the UAV, the camera C and the depth sensor DS are part of a same device.

According to an embodiment of the invention, the unmanned aerial vehicle is a drone.

A UAV is a lightweight unmanned aerial vehicle powered by multiple rotors, typically 4 to 8, running on batteries. The UAV is equipped with onboard electronics including processing means, an Inertial Measurement Unit and additional position and velocity sensors for navigation, and with means for wireless communication with a remote apparatus.

The navigation of a UAV can be controlled by a so-called navigation control method usually implemented on a remote station over a dedicated Application Programming Interface (API) which may provide access to low-level controls, such as the speeds of the rotors, and/or to higher-level features such as a target UAV attitude, elevation speed or rotation speed around the vertical axis passing through the UAV center of mass.

The navigation control method can be developed on top of this API in order to control the displacements of the UAV in real-time. The control can be performed manually from a user interface, for instance relying on graphical pads on a mobile device display. Alternatively, the navigation of the UAV can be constrained programmatically to follow a determined motion trajectory. This motion trajectory defines a target 3D position of the center of mass of the UAV in the 3D coordinate system SMC at each time instant after a reference start time.

The navigation control method can benefit from the positional estimates of the UAV provided by the motion capture system. Such a closed-loop feedback control of a UAV using the motion capture system is described, for example, in the paper entitled «The GRASP Multiple Micro UAV Testbed» by N. Michael et al., published in the Sept. 2010 issue of the IEEE Robotics and Automation Magazine. In this paper, the control of the UAV relies on two nested feedback loops, as shown on Fig. 2. The purpose of the loops is to ensure that the actual attitude and position values of the UAV match the target values determined by a target trajectory. Typically, this is obtained by continuously adjusting the control loop parameters in order to minimize the error between measured and target values, as in well-known PID controllers (see the Wikipedia page on PID controllers, http://en.wikipedia.org/wiki/PID_controller).
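As a minimal, generic illustration of the kind of PID control loop mentioned above (gains, names and sampling period are assumptions, not values from the cited testbed):

```python
class PID:
    """Minimal discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One axis of an outer position loop: the positional error drives a desired pitch angle.
pitch_loop = PID(kp=0.8, ki=0.05, kd=0.3)
theta_des = pitch_loop.step(target=1.5, measured=1.2, dt=0.02)
```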

In more detail, with reference to Fig. 2, a Position Control module takes as input, at each time instant t, the target 3D position of the UAV center of mass rT(t) and its estimated position r(t) in the 3D coordinate system SMC. According to the invention, the accurate estimates of r(t) provided by the motion capture system, owing to the markers M placed on the UAV, can advantageously be fed into the navigation control method, in order to improve the stability and accuracy of the motion trajectory following.

More precisely, a control loop within the Position Control module generates, as a function of the positional error rT(t)-r(t), the desired values of the attitude angles ϕdes(t), θdes(t) and ψdes(t) for the roll, pitch and yaw angles respectively, that stabilize the attitude of the UAV and ensure the desired linear displacement that compensates for the positional error. The Attitude Control module is a second, inner, control loop that generates the increments of the moments Δωϕ, Δωθ, Δωψ to be produced by the UAV rotors along the roll, pitch and yaw axes respectively, in order to obtain the desired attitude values. In addition, the Position Control module feeds the Motor Dynamics module with an extra moment ΔωF that results in a net force along the vertical axis at the center of gravity of the UAV, allowing the control of its altitude. The Motor Dynamics module translates Δωϕ, Δωθ, Δωψ and ΔωF into set point values for the rotor speeds, which are transmitted to the UAV via its communication means, so that the rotor speeds are updated over the API. Using a model of the UAV motors, the Motor Dynamics module translates the updates of the rotor speeds into net forces Ti applied to the UAV along the vertical axes at the location of each rotor, as well as into angular moments Mi along these same axes. From these forces and angular moments, a model of the UAV dynamics makes it possible to compute, in the Rigid Body Dynamics module, the linear acceleration of the UAV and its angular accelerations about the roll, pitch and yaw axes in its body frame. These accelerations are fed back to the Position Control and Attitude Control modules, respectively, to provide the inputs to the control loops implemented in these two modules.

Note that the Position Control and Attitude Control loops use measurements, not represented on Fig. 2, from the Inertial Measurement Unit and the positional sensors mounted on the UAV, in order to estimate the UAV position and attitude at their inputs.

According to the invention, the method for capturing the 3D motion of an object from the 3D locations of markers placed on this object determines the 3D location of at least one marker from at least one image taken by the camera C and from depth information (also called range) obtained from the depth sensor DS.

According to an embodiment of the method, the 3D location of at least one marker placed on the object is determined from the 3D locations of the markers M placed on the UAV.

Fig. 3 shows a diagram of the steps of an embodiment of such a method.

As explained in detail in the introductory part, one single marker, or more generally a small set of neighboring markers on the capture suit of an actor (or more generally placed on an object), is occluded for short periods of time. These markers are hereafter referred to as "markers of interest". The markers of interest are assumed to have been identified, for example, during rehearsal sessions of the motion capture, and marked with a specific color or textured pattern, so that their visual appearance differs from the other markers used for the motion capture. In the case where the set of markers of interest consists of several markers, each of them is preferably assigned a distinct color or texture pattern, so that each individual marker in the set can easily be detected in the image of the camera onboard the UAV.

At step 310, the UAV is positioned at its desired starting position. For example such a starting position is defined by an altitude ZUAV above the floor of the motion capture studio and the same horizontal location as the markers of interest.

This operation is preferably performed by manually operating a user interface for the control of the UAV navigation.

According to a variant of the step 310, the UAV is approximately positioned at the desired starting position by a manual navigation control, and a fine adjustment of its position is performed automatically based on the detection of the markers of interest in the image taken by the camera onboard the UAV, following the step 320 described below.

At step 320, the 3D locations MOI(SMC) of the markers of interest are determined in the 3D coordinate system SMC of the capture volume.

According to an embodiment of step 320, illustrated in Fig. 3, at step 321, the 3D locations MOI(SUAV) of N markers of interest are determined in the coordinate system SUAV.

First, the 2D locations of the N markers of interest are detected in the image of the camera.

Next, taking into account the range measurements of the N markers of interest provided by the depth sensor DS, their 3D locations MOI(SCAM) are computed in a coordinate system SCAM of the camera C. The coordinate system SCAM is defined by the image plane of the camera and its optical axis which is perpendicular to the image plane.

Finally, the 3D locations MOI(SUAV) of the N markers of interest are determined by converting the 3D locations MOI(SCAM) with the help of a 3D geometric transformation T(SCAM;SUAV).

Fig. 4 shows the steps of an embodiment of the step 321. The 3D locations MOI (SUAV) in the coordinate system SUAV are determined as follows.

At step 3211, the image coordinates (xi,yi) of the N markers of interest with 1 ≤ i ≤ N are detected in an image taken by the camera onboard the UAV. This operation is straightforward using methods known from the state of the art, as each of the markers in the set of markers of interest has been tagged with a distinctive color or texture pattern. If a distinctive color is used, detection can be performed by looking for regions of the image whose pixel colors match the known distinctive color. If a distinctive texture pattern is used, the detection can be performed by correlating the image with a set of image patches holding said texture pattern at various scales, and looking for the highest correlation peak over all scales.
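A purely illustrative sketch of the color-based detection of step 3211 (thresholding the distance to the known distinctive color and taking the centroid of the matching pixels; the tolerance value is an assumption):

```python
import numpy as np

def detect_marker_by_color(image, target_color, tolerance=30.0):
    """Return the image coordinates (x_i, y_i) of a marker tagged with a distinctive
    color, as the centroid of the pixels whose color is close to target_color.
    image: HxWx3 RGB array; target_color: RGB triple of the marker."""
    distance = np.linalg.norm(image.astype(float) - np.asarray(target_color, float), axis=2)
    ys, xs = np.nonzero(distance < tolerance)    # pixels matching the distinctive color
    if xs.size == 0:
        return None                              # marker of interest not visible in this image
    return float(xs.mean()), float(ys.mean())
```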

At step 3212, the real-world coordinates (Xi, Yi, Zi) of each of the markers of interest Mi in the coordinate system SCAM are computed from their image coordinates (xi,yi) determined in step 3211, and from their measured depth (also called range) ri provided by the depth sensor DS. Based on the known calibration matrix of the camera C, the image coordinates (xi,yi) of Mi are converted to physical coordinates (xiRP, yiRP) in the retinal plane of the camera, using methods known from the state of the art and described for instance in chapter 3 of the tutorial entitled «Visual 3D modeling from images», by M. Pollefeys, published in the Proceedings of the 2004 Vision, Modeling and Visualization Conference VMV'04. The distance between the optical center of the camera C and its retinal plane is the focal length f of the camera, known from the camera calibration. The horizontal and vertical coordinates of Mi in the camera coordinate system SCAM are then computed from straightforward trigonometry as Xi = xiRP*Zi/f and Yi = yiRP*Zi/f. The measured range ri of Mi is such that ri² = Xi² + Yi² + Zi². Xi, Yi and Zi are computed by solving this system of three equations with three unknowns.
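The system of three equations of step 3212 has a simple closed-form solution, shown below as an illustrative Python sketch (variable names and numerical values are assumptions):

```python
import numpy as np

def backproject(x_rp, y_rp, f, r):
    """Solve X = x_rp*Z/f, Y = y_rp*Z/f, r^2 = X^2 + Y^2 + Z^2 for the coordinates
    (X, Y, Z) of a marker in SCAM, given its retinal-plane coordinates (x_rp, y_rp),
    the focal length f and the range r measured by the depth sensor."""
    Z = r * f / np.sqrt(x_rp ** 2 + y_rp ** 2 + f ** 2)
    return x_rp * Z / f, y_rp * Z / f, Z

# A marker 0.01 m off-axis on the retinal plane (f = 0.02 m), measured at a range of 3 m:
print(backproject(0.01, 0.0, 0.02, 3.0))   # -> (1.342..., 0.0, 2.683...)
```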

At step 3213, the real-world coordinates of each of the markers of interest Mi are converted from the coordinate system SCAM of the camera to the coordinate system SUAV of the UAV by means of the known transform T(SCAM;SUAV) between the two coordinate systems. Indeed, because the camera C is rigidly attached to the UAV, the rotation and translation between SUAV and SCAM can be measured.

Referring again to Fig. 3, at step 322, the UAV is localized in the coordinate system SMC. To this purpose, the 3D locations of the markers M attached to the UAV are obtained from the motion capture system, and the transform T(SUAV;SMC) between the coordinate systems SUAV and SMC is computed from the known locations of these markers in SUAV and their positions in SMC.

At step 323, the locations MOI(SMC) of the markers of interest in the coordinate system SMC are determined by converting the 3D locations MOI(SUAV) computed at step 321 with the help of the 3D geometric transformation T(SUAV;SMC) computed at step 322.
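For illustration, the chain of frame changes of steps 321-323 can be written with 4x4 homogeneous matrices, using the usual convention x_out = R.x_in + t (equivalent to the parameterization used above with t = R.T); all values below are assumptions:

```python
import numpy as np

def to_homogeneous(R, T):
    """Build a 4x4 homogeneous matrix from a rotation R and a translation T,
    so that successive frame changes can be chained by matrix products."""
    H = np.eye(4)
    H[:3, :3] = np.asarray(R)
    H[:3, 3] = np.asarray(T)
    return H

# Fixed transform T(SCAM;SUAV), measured once (step 3213), and the transform
# T(SUAV;SMC) estimated at step 322 (illustrative values).
H_cam_to_uav = to_homogeneous(np.eye(3), [0.0, 0.0, -0.05])
H_uav_to_mc = to_homogeneous([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]], [2.0, 1.0, 3.0])

moi_cam = np.array([0.2, -0.1, 2.5, 1.0])          # MOI(SCAM), homogeneous coordinates
moi_mc = H_uav_to_mc @ (H_cam_to_uav @ moi_cam)    # MOI(SMC), as obtained at step 323
print(moi_mc[:3])
```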

At step 330, the control of the UAV trajectory is performed from the 3D locations M(SMC) of the markers M. This step simply relies on the previously described UAV navigation control software to programmatically constrain the UAV to follow a determined trajectory. At any time during the motion capture session, the target position of the UAV within this trajectory is defined by its coordinates (XT,YT,ZT) in the coordinate system SMC:

  • (XT,YT) are set to the current horizontal location of the center of gravity of the markers of interest in the 3D coordinate system SMC, computed at step 320, offset by a predetermined vector (ΔXT, ΔYT); a minimal sketch of this target computation is given after this list. Let (XCAM, YCAM) be the horizontal offset of the optical center of the camera C in the coordinate system SUAV.

In a preferred embodiment, (ΔXT, ΔYT) is set to (-XCAM,-YCAM), which amounts to constraining the optical center of the camera C to be positioned just above the center of gravity of the markers of interest.

  • ZT is set to ZUAV, constraining the UAV to remain at a fixed height above the floor of the capture volume, this height being large enough to avoid disturbing the actors' play, as explained previously.
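A minimal sketch, under the assumptions above, of how the target position of the UAV could be computed at each instant (function and variable names are illustrative):

```python
import numpy as np

def uav_target_position(moi_positions_mc, offset_xy, z_uav):
    """Target position (XT, YT, ZT) of the UAV in SMC: horizontal center of gravity of
    the markers of interest plus a predetermined offset, at a constant altitude z_uav."""
    centroid = np.mean(np.asarray(moi_positions_mc), axis=0)
    return np.array([centroid[0] + offset_xy[0], centroid[1] + offset_xy[1], z_uav])

# Preferred embodiment: offset (-XCAM, -YCAM), so that the optical center of the camera C
# hovers just above the center of gravity of the markers of interest.
target = uav_target_position([[1.0, 2.0, 0.9], [1.2, 2.2, 1.1]],
                             offset_xy=(-0.05, -0.02), z_uav=3.0)
```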

According to a variant, the control of the UAV trajectory is performed manually by operating the user interface of the UAV navigation control.

At step 340, the locations MOI(SMC) of the markers of interest are transmitted to the optical motion capture system using, for example, the onboard UAV transmission means.

According to an embodiment, the steps 321-323, 330 and 340 in Fig. 3 are performed at predefined time intervals for the whole duration of a motion capture session.

According to a variant, several UAVs, each equipped with a camera C, a depth sensor DS and a pattern of at least four non-planar markers for their localization in the capture volume, are used simultaneously during the steps 321-323, 330 and 340. Each of the UAVs (i) is assigned a specific horizontal position target offset (ΔXT,i, ΔYT,i), and, optionally, a specific target altitude ZUAV,i in the capture volume, in order to provide different viewpoints on the markers of interest. The values of the (ΔXT,i, ΔYT,i) and the ZUAV,i are chosen to maintain a minimal separation distance between the UAVs, in order to avoid aerodynamic interference. Each of the UAVs independently computes the locations of the markers of interest in the 3D coordinate system SMC of the capture volume and transmits them to the motion capture system.

According to a variant of this variant, the consistency of this data is checked, outlying position estimates are rejected and, eventually, a robust estimate of the positions of the markers of interest is computed at each transmission instant from the consistent estimates provided by each UAV. This robust estimate may be computed, for instance, as the median value of the estimates provided by each UAV.
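An illustrative sketch of such a robust fusion (the consistency threshold is an assumption):

```python
import numpy as np

def robust_marker_estimate(estimates, max_spread=0.05):
    """Fuse the position estimates of a marker of interest reported by several UAVs:
    reject estimates lying too far from the coordinate-wise median, then return the
    median of the remaining, consistent estimates (all positions in SMC)."""
    pts = np.asarray(estimates, dtype=float)
    median = np.median(pts, axis=0)
    consistent = pts[np.linalg.norm(pts - median, axis=1) < max_spread]
    return np.median(consistent, axis=0) if consistent.size else median
```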

On Fig. 2-4, the modules are functional units, which may or may not be in relation with distinguishable physical units. For example, these modules, or some of them, may be brought together in a unique component or circuit, or contribute to the functionalities of a piece of software. Conversely, some modules may potentially be composed of separate physical entities. The apparatuses which are compatible with the invention are implemented using either pure hardware, for example using dedicated hardware such as ASIC, FPGA or VLSI, respectively «Application Specific Integrated Circuit», «Field-Programmable Gate Array» and «Very Large Scale Integration», or from several integrated electronic components embedded in a device, or from a blend of hardware and software components.

Figure 5 represents an exemplary architecture of a device 50 configured to control the navigation of an unmanned aerial vehicle.

Device 50 comprises the following elements, which are linked together by a data and address bus 51:

  • a microprocessor 52 (or CPU), which is, for example, a DSP (or Digital Signal Processor);
  • a ROM (or Read Only Memory) 53;
  • a RAM (or Random Access Memory) 54;
  • an I/O interface 55 for reception of data to transmit, from an application; and
  • a battery 56.

According to a variant, the battery 56 is external to the device. Each of these elements of Fig. 5 is well-known by those skilled in the art and will not be described further. ROM 53 comprises at least a program and parameters. The algorithm of the method according to the invention is stored in the ROM 53. When switched on, the CPU 52 uploads the program into the RAM and executes the corresponding instructions.

RAM 54 comprises, in registers, the program executed by the CPU 52 and uploaded after switch-on of the device 50, input data, intermediate data in different states of the method, and other variables used for the execution of the method.

The microprocessor 52, the I/O interface 55, the ROM 53 and the RAM 54 are configured to control the navigation of an unmanned aerial vehicle such as a UAV, i.e. these elements are configured to specify a target position of the unmanned aerial vehicle at each time instant, corresponding to a motion trajectory in the 3D coordinate system SMC which is defined from the 3D locations MOI(SMC) of markers of interest as explained above. It is then possible to control the unmanned aerial vehicle (a UAV for example) in such a way that a part of it follows a motion trajectory in the coordinate system SMC at a predefined speed in order to capture images of markers which are not visible to a sufficient number of cameras of the camera setup of the mocap system.

According to a variant, the device comprises a Graphical User Interface 57 which is configured to allow a user to specify a target position of the unmanned aerial vehicle UAV at a time instant. The unmanned aerial vehicle UAV trajectory control is then operated and/or displayed from the Graphical User Interface 57 that can take the form for example of a joystick or a tactile interface, e.g., on a tablet.

The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.

Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include a post-processor, a pre-processor, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.

Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM"). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.

As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.

Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments.

Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.

While not explicitly described, the present embodiments and variants may be employed in any combination or sub-combination.
