No. Patent Title Application No. Filing Date Publication No. Publication Date Inventors
141 System for Occlusion Adjustment for In-Vehicle Augmented Reality Systems US15244975 2016-08-23 US20180059779A1 2018-03-01 Emrah Akin Sisbot; Kentaro Oguchi
The disclosure includes embodiments for providing occlusion adjustment for an in-vehicle augmented reality system. A system may include a three-dimensional heads-up display unit (“3D HUD”) installed in a vehicle. The system may include a memory storing instructions that, when executed, cause the system to: display, on the 3D HUD, a first graphic that is viewable by a driver of the vehicle as overlaying an object of interest when the driver is looking at the 3D HUD; determine that at least a portion of the object of interest is occluded by an occluding object; turn off the first graphic so that the first graphic is not displayed on the 3D HUD; and display, on the 3D HUD, a second graphic that does not overlay the occluding object and visually indicates to the driver the location of the object of interest behind the occluding object when the driver is looking at the 3D HUD.
142 Filling in surround view areas blocked by mirrors or other vehicle parts US14927983 2015-10-30 US09902322B2 2018-02-27 Andreas U. Kuehnle; Cathy L. Boon; Zheng Li; Hans M. Molin
An imaging system, method, and computer readable medium for filling in blind spot regions in images of peripheral areas of a vehicle. Intrinsic or extrinsic blind spot data is used together with vehicle movement data including vehicle speed and steering angle information to determine one or more portions of a series of images of the peripheral areas that include or will include one or more blind spot obstructions in the images. Portions of the images predicted to be obstructed at a future time, portions of overlapping images obtained concurrently from plural sources, or both, are obtained and used as an image patch. A blind spot region restoration unit operates to stitch together a restored image without the blind spot obstruction by merging one or more image patches into portions of the images that include the one or more blind spot obstructions.
143 Method for detecting an object in an environmental region of a motor vehicle, driver assistance system and motor vehicle US14863727 2015-09-24 US09892519B2 2018-02-13 Camilo Vejarano; Julien Rebut
A method for detecting an object captured by a camera in an environmental region of a vehicle based on a temporal sequence of images of the environmental region is disclosed. An electronic evaluation device is used to determine at least one characteristic pixel of the object in a first image of the sequence of images, and the determined characteristic pixel is tracked in at least a second image, the tracking providing a flow vector having a vertical component and a horizontal component. A first depth component, which is perpendicular to the vertical component and the horizontal component, is determined based on the vertical component, and a second depth component, perpendicular to the vertical component and the horizontal component, is determined based on the horizontal component. When the first and second depth components correspond within a tolerance range, a validated final depth component is provided.
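The validation step described above can be sketched as follows. This is a minimal illustration, assuming the two independent depth estimates have already been computed from the vertical and horizontal flow components; the tolerance value and the averaging of the two agreeing estimates are assumptions, not the patent's actual scheme.

```python
def validated_depth(depth_from_vertical, depth_from_horizontal, tol=0.2):
    """Return a validated final depth only when the depth component derived
    from the vertical flow and the one derived from the horizontal flow
    agree within a tolerance range; otherwise reject the measurement."""
    larger = max(depth_from_vertical, depth_from_horizontal)
    if abs(depth_from_vertical - depth_from_horizontal) <= tol * larger:
        # The two independent estimates correspond: fuse them into the
        # validated final depth (simple averaging is an assumption).
        return (depth_from_vertical + depth_from_horizontal) / 2.0
    return None  # estimates disagree; no validated depth for this pixel
```

Requiring agreement between two independently derived estimates suppresses spurious flow measurements before they reach downstream object detection.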
144 METHOD, APPARATUS AND DEVICE FOR DETECTING LANE BOUNDARY US15279454 2016-09-29 US20180033148A1 2018-02-01 Jiatong Zheng; Yingfang Du; Wei Liu
A method, an apparatus and a device for detecting lane boundaries are provided. The method includes: obtaining a current image of a lane, and extracting brightness jump points in the image by filtering the image; filtering out noise points from the brightness jump points, and determining remaining brightness jump points as edge points to form groups of the edge points, where a connection line of edge points in the same group forms one edge line; recognizing edge lines of the lane boundaries from edge lines; and grouping the edge lines of the lane boundaries, and recognizing edge lines in each group as edge lines of one lane boundary. With this method, the calculation for detecting the lane boundaries is simpler, and the calculation resources and time consumed are reduced, so that the lane boundaries can be detected accurately and quickly.
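The first two steps (brightness-jump extraction and noise filtering) can be sketched as follows; the horizontal difference filter and the fixed threshold are assumptions, since the abstract does not specify the filter or the noise criterion.

```python
import numpy as np

def detect_edge_points(gray, jump_thresh=40):
    """Extract brightness jump points with a horizontal difference filter,
    then keep only strong jumps as candidate edge points; weak jumps are
    treated as noise points and filtered out."""
    # Horizontal brightness difference between neighboring pixels
    diff = np.abs(np.diff(gray.astype(np.int32), axis=1))
    # Survivors of the threshold are the edge points (row, column)
    rows, cols = np.nonzero(diff > jump_thresh)
    return list(zip(rows.tolist(), cols.tolist()))
```

The surviving edge points would then be grouped into edge lines, e.g. by fitting or connecting near-collinear points.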
145 VEHICULAR CONTROL SYSTEM WITH TRAILERING ASSIST FUNCTION US15722150 2017-10-02 US20180025237A1 2018-01-25 Sebastian Pliefke; Paul Jarmola; Thomas Wierich; Steven V. Byrne; Yuesheng Lu
A vehicular control system includes a camera having an exterior field of view at least rearward of the vehicle and operable to capture image data. A trailer is attached to the vehicle and image data captured by the camera includes image data captured when the vehicle is maneuvered with the trailer at an angle relative to the vehicle. The vehicular control system determines a trailer angle of the trailer and is operable to determine a path of the trailer responsive at least to a steering angle of the vehicle and the determined trailer angle of the trailer. The vehicular control system determines an object present exterior of the vehicle and the vehicular control system distinguishes a drivable surface from a prohibited space, and the vehicular control system plans a driving path for the vehicle that neither impacts the object nor violates the prohibited space.
146 IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, VEHICLE, IMAGING APPARATUS AND IMAGE PROCESSING METHOD US15546380 2016-01-28 US20170364765A1 2017-12-21 Takatoshi NAKATA; Tomo SHIMABUKURO
An image processing apparatus, an image processing system, a vehicle, an imaging apparatus and an image processing method for dynamically determining an image processing area in a captured image of the vehicle's surrounding area are provided. The image processing apparatus, mounted on the vehicle, includes a processor configured to determine an image processing area on a captured image of a traveling path. The processor is configured to perform processing to determine at least a part of an approximate line corresponding to a distal end of the traveling path in the captured image based on at least one of the luminance information and the color information of the captured image and processing to determine the image processing area based on a position previously determined relative to at least a part of the approximate line.
147 Vehicle-Mounted External Display System US15176637 2016-06-08 US20170355306A1 2017-12-14 Fernando Bellotti; D'Amelio Luciano Emmanuel; Maselli Christian Dario; Luis Maria Sanchez Zinny
A terrestrial vehicle such as a truck has one or more video cameras mounted to capture a field of view that includes looking forward and ahead of the vehicle. The vehicle also includes one or more rear-mounted, rear-facing displays that are operably coupled to such a camera. Forward-looking images from the front side of the vehicle are presented on the aforementioned displays to thereby provide a viewer positioned behind the vehicle with that forward-looking image of the road ahead. So configured, drivers looking to pass such a vehicle are provided with real-time information regarding circumstances that can greatly affect the safety of such an action and can use that information to better inform their decisions and actions.
148 METHOD AND DEVICE FOR THE DISTORTION-FREE DISPLAY OF AN AREA SURROUNDING A VEHICLE US15679603 2017-08-17 US20170341582A1 2017-11-30 Markus Friebe; Felix Löhr
A camera surround view system for a vehicle includes at least one vehicle camera that supplies camera images processed by a data processing unit to generate an image of the surroundings. The image of the surroundings is displayed on a display unit. The data processing unit re-projects textures, which are detected by the vehicle cameras, on an adaptive re-projection surface, which is similar to the area surrounding the vehicle, the re-projection surface being calculated based on sensor data provided by vehicle sensors. The data processing unit adapts the re-projection surface depending on a position and an orientation of a virtual camera.
149 IMAGE PROCESSING DEVICE, IN-VEHICLE DISPLAY SYSTEM, DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM US15514187 2014-12-10 US20170301107A1 2017-10-19 Ryosuke SASAKI
In an image processing device (120), an extraction unit (121) extracts a plurality of objects from a captured image (101). A prediction unit (122) predicts a future distance between the plurality of objects extracted by the extraction unit (121). A classification unit (123) classifies into groups the plurality of objects extracted by the extraction unit (121) based on the future distance predicted by the prediction unit (122). A processing unit (124) processes the captured image (101) into a highlight image (102). The highlight image (102) is an image in which the plurality of objects classified by the classification unit (123) are highlighted separately for each group.
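The classification step can be sketched as a greedy grouping pass. The `predict_distance(a, b)` callback standing in for the prediction unit, the threshold, and the single-pass merging strategy are all assumptions for illustration.

```python
def group_by_future_distance(objects, predict_distance, threshold=2.0):
    """Classify objects into groups: an object joins an existing group when
    its predicted future distance to any member falls below the threshold;
    otherwise it starts a new group."""
    groups = []
    for obj in objects:
        for group in groups:
            if any(predict_distance(obj, other) < threshold for other in group):
                group.append(obj)
                break
        else:
            groups.append([obj])
    return groups
```

A highlighting stage could then draw one outline color per returned group, so objects predicted to converge are emphasized together.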
150 ROAD FEATURE DETECTION USING A VEHICLE CAMERA SYSTEM US15486638 2017-04-13 US20170300763A1 2017-10-19 Guangyu J. Zou; Jinsong Wang; Upali P. Mudalige; Shuqing Zeng
Examples of techniques for road feature detection using a vehicle camera system are disclosed. In one example implementation, a computer-implemented method includes receiving, by a processing device, an image from a camera associated with a vehicle on a road. The computer-implemented method further includes generating, by the processing device, a top view of the road based at least in part on the image. The computer-implemented method further includes detecting, by the processing device, lane boundaries of a lane of the road based at least in part on the top view of the road. The computer-implemented method further includes detecting, by the processing device, a road feature within the lane boundaries of the lane of the road using machine learning.
151 Image processing apparatus, imaging apparatus and drive assisting method US14395601 2013-04-16 US09789818B2 2017-10-17 Takatoshi Nakata; Takahiro Okada
An image processing apparatus includes an I/F, a synthesizer, and a color determinator. The I/F obtains a captured image, which is generated by imaging a subject in the vicinity of a vehicle. The synthesizer superimposes an indicator on the captured image. When the color of the captured image is similar to a first color, the color determinator changes the color of the indicator from the first color to a different color.
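The color determinator's decision can be sketched as a simple similarity test; the Euclidean RGB distance metric, the threshold, and the fixed fallback color are assumptions.

```python
def indicator_color(image_color, first_color, fallback_color, sim_thresh=60.0):
    """If the captured image's dominant color is similar to the indicator's
    first color, switch the indicator to a fallback color so it remains
    visible against the image."""
    dist = sum((a - b) ** 2 for a, b in zip(image_color, first_color)) ** 0.5
    return fallback_color if dist < sim_thresh else first_color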
152 Surrounding environment recognition device US15035812 2014-10-15 US09773177B2 2017-09-26 Masayuki Takemura; Masahiro Kiyohara; Kota Irie; Masao Sakata; Yoshitaka Uchida
A surrounding environment recognition device includes: an image acquisition unit that obtains a photographic image from a camera for capturing, via a camera lens, an image of a surrounding environment around a mobile object; an image recognition unit that recognizes an object image of an object present in the surrounding environment based upon the photographic image obtained by the image acquisition unit; an accumulation detection unit that detects accumulation settled on the camera lens based upon the photographic image obtained by the image acquisition unit; a suspension decision-making unit that makes a decision, based upon detection results provided by the accumulation detection unit, whether or not to suspend operation of the image recognition unit; a tracking unit that detects a characteristic quantity in a tracking target image from a specific area in an image captured by the image acquisition unit for a reference frame, determines through calculation an estimated area where the characteristic quantity should be detected in an image captured by the image acquisition unit for a later frame relative to the reference frame and makes a decision as to whether or not the characteristic quantity for the tracking target image is also present in the estimated area; and a resumption decision-making unit that makes a decision, based upon at least decision-making results provided by the tracking unit and indicating whether or not the characteristic quantity for the tracking target image is present, as to whether or not to resume the operation of the image recognition unit currently in a suspended state.
153 METHOD AND DEVICE FOR INTERPRETING A SURROUNDING ENVIRONMENT OF A VEHICLE, AND VEHICLE US15436018 2017-02-17 US20170243070A1 2017-08-24 Holger Janssen
A method is described for interpreting a surrounding environment of a vehicle. The method includes a step of providing an image of the surrounding environment of the vehicle, a step of forming an item of relational information using a first position of a first object in the image and a second position of a second object in the image, the item of relational information representing a spatial relation between the first object and the second object in the surrounding environment of the vehicle, and a step of use of the item of relational information in order to interpret the surrounding environment of the vehicle.
154 Obstacle alert device US14237451 2011-11-01 US09691283B2 2017-06-27 Tetsuya Maruoka; Akira Kadoya; Keigo Ikeda
An obstacle alert device notifies a driver of the presence of an obstacle approaching a vehicle without making it difficult to see the state of the periphery of the vehicle. The apparatus includes: a captured image obtainment unit that obtains a captured image of a scene of the periphery of the vehicle; a target captured image generation unit that generates a target captured image based on the captured image; an object presence determination unit that determines whether or not an object is present in an outside region that is on an outer side of the target captured image; a movement direction determination unit that determines a movement direction of the object in the outside region; and a notification image output unit. In the case where the movement direction determination unit has determined that the object in the outside region is moving toward the center of the target captured image, the notification image output unit sequentially displays a plurality of indicators, which appear for a set amount of time and then disappear, in different locations of the target captured image, starting at the side having the outside region in which the object is present and moving toward the center of the target captured image. This display is repeated, with the plurality of indicators displayed in positions where they partially overlap with each other, the later indicator being displayed over the immediately previous indicator at the areas where they overlap.
155 METHOD FOR TRACKING A TARGET VEHICLE APPROACHING A MOTOR VEHICLE BY MEANS OF A CAMERA SYSTEM OF THE MOTOR VEHICLE, CAMERA SYSTEM AND MOTOR VEHICLE US15322225 2015-04-29 US20170162042A1 2017-06-08 Damien Dooley; Martin Glavin; Edward Jones; Liam Kilmartin
The invention relates to a method for tracking a target vehicle (9) approaching a motor vehicle (1) by means of a camera system (2) of the motor vehicle (1). A temporal sequence of images (10) of an environmental region of the motor vehicle (1) is provided by means of at least one camera (3) of the camera system (2). The target vehicle (9) is detected in an image (10) of the sequence by means of an image processing device (5) of the camera system (2) based on a feature of a front (11) or of a rear of the target vehicle (9), and then the target vehicle (9) is tracked over subsequent images (10) of the sequence based on the detected feature. At least a predetermined feature of a lateral flank (14) of the target vehicle (9) is detected in one of the subsequent images (10) of the sequence by the image processing device (5), and after detection of the feature of the lateral flank (14), the target vehicle (9) is tracked over further images (10) of the sequence based on the feature of the lateral flank (14).
156 DETECTING VISUAL INFORMATION CORRESPONDING TO AN ANIMAL US15366570 2016-12-01 US20170154241A1 2017-06-01 Yakov SHAMBIK; Noam YOGEV; Erez DAGAN
A system for detecting an animal in a vicinity of a vehicle is disclosed. The system includes at least one processor programmed to receive, from an image capture device, at least one image of the vicinity of the vehicle; analyze the at least one image using a truncated animal appearance template to detect visual information suspected to be associated with an animal, wherein the truncated animal appearance template corresponds to a portion of a body and one or more limbs of a suspected animal; and initiate a vehicle response based on the detected visual information of the suspected animal in the at least one image.
157 Perception-Based Speed Limit Estimation And Learning US14919540 2015-10-21 US20170116485A1 2017-04-27 Jonathan Thomas Mullen
Systems, methods, and devices for estimating a speed limit are disclosed herein. A system for estimating a speed limit includes one or more perception sensors, an attribute component, an estimator component, and a notification component. The one or more perception sensors are configured to generate perception data about a region near a vehicle. The attribute component is configured to detect one or more environmental attributes based on the perception data. The estimator component is configured to determine an estimated speed limit based on the environmental attributes. The notification component is configured to provide the estimated speed limit to an automated driving system or driver assistance system of the vehicle.
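The estimator component's mapping from environmental attributes to a speed-limit estimate can be sketched with hand-written rules. The attribute names, the rule order, and the km/h values below are illustrative assumptions; the patent does not disclose a specific rule set, and a learned model could replace the rules.

```python
def estimate_speed_limit(attributes):
    """Map detected environmental attributes (from perception data) to an
    estimated speed limit in km/h using simple ordered rules."""
    if attributes.get("school_zone"):
        return 30
    if attributes.get("pedestrians") or attributes.get("parked_cars"):
        return 50
    if attributes.get("divided_highway") and attributes.get("lane_count", 1) >= 2:
        return 100
    return 70  # assumed default for an open undivided road
```

The notification component would then pass this estimate to the automated driving or driver assistance system, e.g. as a ceiling on the set cruise speed.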
158 VEHICLE VISION SYSTEM CAMERA WITH ADAPTIVE FIELD OF VIEW US15286683 2016-10-06 US20170104907A1 2017-04-13 Harshad P. Rajhansa; Sai Sunil Charugundla Gangadhar
A vision system for a vehicle includes a camera configured to be disposed at a vehicle so as to have a field of view exterior of the vehicle. The camera includes a wide angle lens providing a field of view of the camera and the camera captures an image data set representative of the field of view of the camera. An image processor may process image data captured by the camera and may process a sub-set of the image data set representative of a sub-portion of the field of view of said camera that is less than the field of view of the camera. A display may display images derived from the sub-set of the image data set representative of the sub-portion of the field of view of the camera. The sub-set of the image data set for processing or display is determined based on steering of the vehicle.
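The steering-dependent selection of the sub-set of the image data set can be sketched as a horizontal crop window; the linear angle-to-offset mapping, the maximum steering angle, and the clamping to the sensor bounds are assumptions.

```python
def steering_subview(full_width, sub_width, steering_angle, max_angle=540.0):
    """Choose the horizontal extent (left, right) of the sub-portion of the
    wide-angle image to process or display, shifting the window toward the
    direction of the turn as the steering angle grows."""
    center_left = (full_width - sub_width) // 2
    shift = int((steering_angle / max_angle) * (full_width - sub_width) / 2)
    left = max(0, min(full_width - sub_width, center_left + shift))
    return left, left + sub_width
```

Processing only this window keeps the wide-angle lens's coverage available while limiting per-frame compute and display distortion to the region relevant to the maneuver.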
159 VEHICLE VISION SYSTEM US15361748 2016-11-28 US20170072880A1 2017-03-16 Michael J. Higgins-Luthman; Yuesheng Lu
A vehicle vision system for a vehicle includes an image sensor having a field of view and capturing image data of a scene exterior of the vehicle. A monitor monitors electrical power consumption of the vehicle. At least one lighting system draws electrical power from the vehicle when operated. An image processor processes image data captured by the image sensor. The electrical power drawn by the at least one lighting system is varied at least in part responsive to processing of captured image data by the image processor in order to adjust fuel consumption by the vehicle.
160 DRIVE-BY CALIBRATION FROM STATIC TARGETS US15207667 2016-07-12 US20170032526A1 2017-02-02 Lingjun Gao; Heba Khafagy; Dave Wibberley
A method for deriving extrinsic camera parameters of a vehicle camera. Calibration markers are provided on a flat ground surface and the vehicle is driven past the calibration markers. Marker boundaries are detected and matched to stored pre-determined shape parameters, and a marker shape is identified. At least one extrinsic parameter of the camera is derived using the tracked positions of the identified marker shape in the video sequence captured while the vehicle is moving, wherein the extrinsic parameter is selected from the mounting position and rotations about the horizontal and vertical axes of a vehicle coordinate system.