


ErgoVR Immersive CAVE Virtual Simulation Laboratory

Model: ErgoVR CAVE

Type: CAVE Virtual Reality System

Description: The ErgoVR immersive CAVE virtual reality laboratory provides technical services including lighting-environment and visual simulation, acoustic-environment and auditory simulation, odor and olfactory simulation, human-computer interaction and haptic-feedback simulation, interaction evaluation, human-machine-environment testing, ergonomic analysis, human-factors design and virtual assembly, virtual exhibition, and virtual training.


The ErgoVR human-computer interaction CAVE immersive virtual simulation laboratory is built from core components developed in-house by Kingfar Technology: the ErgoLAB virtual-world human-machine-environment synchronization cloud platform, the CAVE virtual reality system, the ErgoVR ergonomics analysis system, the ErgoHMI human-computer interaction evaluation system, and the WorldViz (USA) head-mounted walking virtual reality system. The CAVE is a large, multi-user immersive virtual reality display and interaction environment that delivers wide-field-of-view, high-resolution, high-quality stereoscopic imagery, bringing the virtual environment close to the real world, and it supports lighting-environment and visual simulation, acoustic-environment and auditory simulation, odor and olfactory simulation, human-computer interaction and haptic-feedback simulation, interaction evaluation, human-machine-environment testing, ergonomic analysis, human-factors design and virtual assembly, virtual exhibition, and virtual training.

The ErgoVR virtual reality synchronization module performs visual, auditory, olfactory, haptic, and interaction simulation, while the ErgoLAB human-machine-environment synchronization cloud platform consists of wearable physiological recording, VR eye-tracking, wearable EEG, interaction behavior observation, biomechanical measurement, and environmental measurement modules. When human-machine-environment or psychological and behavioral research is combined with virtual reality, the platform synchronously acquires quantitative human-machine-environment data in real time as the 3D virtual environment changes (including eye movements, EEG, respiration, heart rate, pulse, skin conductance, skin temperature, ECG, EMG, body motion, joint angles, body pressure, pull force, grip force, and pinch force, together with physical-environment data such as vibration, noise, illumination, atmospheric pressure, temperature, and humidity) and analyzes and evaluates them; the resulting quantitative measures provide objective data support for scientific research.
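
To make the idea of synchronized multi-stream acquisition concrete, the minimal Python sketch below is purely illustrative and is not the ErgoLAB API: a few hypothetical streams (gaze position, skin conductance, the current VR scene) are polled in parallel and every sample is timestamped against one shared clock so the streams can be aligned afterwards.

    # Illustrative only: timestamp several data streams against one shared clock.
    # Stream names, rates, and the dummy reader functions are hypothetical.
    import threading
    import time
    from queue import Queue

    samples = Queue()          # holds (stream_name, timestamp_s, value) tuples
    t0 = time.monotonic()      # shared reference clock for all streams

    def record(stream_name, rate_hz, read_sample, duration_s=5.0):
        """Poll one source at rate_hz, timestamping each sample relative to t0."""
        period = 1.0 / rate_hz
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            samples.put((stream_name, time.monotonic() - t0, read_sample()))
            time.sleep(period)

    # Dummy readers stand in for real sensors (eye tracker, GSR, VR scene state).
    threads = [
        threading.Thread(target=record, args=("gaze_x", 120, lambda: 0.0)),
        threading.Thread(target=record, args=("skin_conductance", 32, lambda: 1.2)),
        threading.Thread(target=record, args=("vr_scene_id", 10, lambda: "scene_A")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # All samples now share one time base and can be merged and sorted for analysis.
    aligned = sorted(samples.queue, key=lambda s: s[1])
    print(f"collected {len(aligned)} timestamped samples across 3 streams")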

As the core data synchronization, acquisition, and analysis platform of this solution, the ErgoLAB human-machine-environment synchronization platform supports not only virtual reality environments but also field research in the real world and laboratory-based basic research, so multiple data streams can be acquired and quantitatively evaluated in any experimental setting. (The platform comprises the virtual reality synchronization module, wearable physiological recording module, VR eye-tracking module, wearable EEG module, interaction behavior observation module, biomechanical measurement module, environmental measurement module, and so on.)

As the core virtual reality software engine of the solution, WorldViz not only supports VR head-mounted displays but also provides users with high-quality application content. Combined with the walking motion-tracking system and the virtual human-computer interaction system, users can interact directly with the virtual scene and its content.
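
For orientation, WorldViz virtual environments are scripted in Python through the Vizard toolkit. The short sketch below is only a rough example under stated assumptions: the environment model file name is a placeholder, the keyboard handler merely stands in for real controller or tracker input, and the HMD/tracking hardware configuration (normally done through vizconnect) is omitted.

    # Minimal Vizard-style sketch; model file and interaction are placeholders.
    import viz
    import vizshape

    viz.setMultiSample(4)      # basic anti-aliasing
    viz.go()                   # open the rendering window and start the main loop

    environment = viz.addChild('piazza.osgb')   # placeholder scene model
    ball = vizshape.addSphere(radius=0.1)       # a simple object to interact with
    ball.setPosition(0, 1.5, 2)

    def on_key(key):
        # Nudge the object away from the viewer when the spacebar is pressed.
        if key == ' ':
            x, y, z = ball.getPosition()
            ball.setPosition(x, y, z + 0.2)

    viz.callback(viz.KEYDOWN_EVENT, on_key)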

Application Areas

  • BIM and environment-behavior research virtual simulation laboratory solution: affective architectural design, environmental behavior, interior design, human-settlement research, etc.
  • Interaction design virtual simulation laboratory solution: virtual planning, virtual design, virtual assembly, virtual review, virtual training, equipment status visualization, etc.
  • Defense and military equipment human-machine-environment virtual simulation laboratory solution: human-machine-environment systems engineering for weapons and equipment, military psychology, military training, military education, command and control, weapon research and development, etc.
  • User experience and usability research virtual simulation laboratory solution: game experience, experiential sports, film and video entertainment, multi-user entertainment projects.
  • Virtual shopping and consumer behavior research laboratory solution
  • Safety ergonomics and unsafe-behavior virtual simulation laboratory solution
  • Driving behavior virtual simulation laboratory solution
  • Human factors engineering and work-study virtual simulation laboratory solution

Its users span many application fields, including education and psychology, training, architectural design, military and aerospace, medicine, entertainment, and graphics modeling. The product is especially competitive in cognition-related research, with more than five hundred users at universities and research institutions in Europe, North America, and China.

1) Research Center for Virtual Environments and Behavior, University of California, Santa Barbara

The lab focuses on cognition-related research, including social psychology, vision, and spatial cognition, and has published extensively in leading international journals (see the publication list).

2) Psychology and Computer Science Laboratory, Miami University

Research area: spatial cognition

Human Spatial Cognition: In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can wirelessly walk through extremely large computer-generated virtual environments.

Research project example, Specificity of Spatial Memories: When people learn about the locations of objects in a scene, what information gets represented in memory? For example, do people only remember what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar but not identical to the views that they have learned. In a third project, we examine the reference frames that are used to code spatial information in memory. In a fourth project, we investigate whether the biases that people have in their memory for pictures also occur when they remember three-dimensional scenes.

Nonvisual Egocentric Spatial Updating: When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update (i.e., keep track of changes in our position and orientation relative to the environment).

Website: http://www.users.muohio.edu/wallerda/spacelab/spacelabproject.html

 

3) Department of Psychology, University of Waterloo, Canada

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT H8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Arrington eye tracker

Research area: behavioral science

Professor Colin Ellard on his research: I am interested in how the organization and appearance of natural and built spaces affects movement, wayfinding, emotion and physiology. My approach to these questions is strongly multidisciplinary and is informed by collaborations with architects, artists, planners, and health professionals. Current studies include investigations of the psychology of residential design, wayfinding at the urban scale, restorative effects of exposure to natural settings, and comparative studies of defensive responses. My research methods include both field investigations and studies of human behavior in immersive virtual environments.

Websites: http://www.psychology.uwaterloo.ca/people/faculty/cellard/index.html  http://virtualpsych.uwaterloo.ca/research.htm  http://www.colinellard.com/

 

Selected publications: Colin Ellard (2009). Where am I? Why we can find our way to the Moon but get lost in the mall. Toronto: Harper Collins Canada.

Journal Articles: Colin Ellard and Lori Wagar (2008). Plasticity of the association between visual space and action space in a blind-walking task. Perception, 37(7), 1044-1053.

Colin Ellard and Meghan Eller (2009). Spatial cognition in the gerbil: Computing optimal escape routes from visual threats. Animal Cognition, 12(2), 333-345.

Posters: Kevin Barton and Colin Ellard (2009). Finding your way: The influence of global spatial intelligibility and field-of-view on a wayfinding task. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

Brian Garrison and Colin Ellard (2009). The connection effect in the disconnect between peripersonal and extrapersonal space. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

 

4) Virtual Human Interaction Lab, Stanford University (USA)

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Complete Characters avatar package

The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds. However, oftentimes it is necessary to develop new gesture tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms in order to answer these basic social questions. Consequently, we also engage in research geared towards developing new ways to produce these VR simulations.

Our research programs tend to fall under one of three larger questions:

      1. What new social issues arise from the use of immersive VR communication systems?

      2. How can VR be used as a basic research tool to study the nuances of face-to-face interaction?

      3. How can VR be applied to improve everyday life, for example in legal practice and communication systems?

 

Website: http://vhil.stanford.edu/

 

5) Neuroscience laboratory, University of California, San Diego

Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display

The long-range objective of the laboratory is to better understand the neural bases of human sensorimotor control and learning. Our approach is to analyze normal motor control and learning processes, and the nature of the breakdown in those processes in patients with selective failure of specific sensory or motor systems of the brain. Toward this end, we have developed novel methods of imaging and graphic analysis of spatiotemporal patterns inherent in digital records of movement trajectories. We monitor movements of the limbs, body, head, and eyes, both in real environments and in 3D multimodal, immersive virtual environments, and recently have added synchronous recording of high-definition EEG. One domain of our studies is Parkinson's disease. Our studies have been dissecting out those elements of sensorimotor processing which may be most impaired in Parkinsonism, and those elements that may most crucially depend upon basal ganglia function and cannot be compensated for by other brain systems. Since skilled movement and learning may be considered opposite sides of the same coin, we also are investigating learning in Parkinson’s disease: how Parkinson’s patients learn to adapt their movements in altered sensorimotor environments; how their eye-hand coordination changes over the course of learning sequences; and how their neural dynamics are altered when learning to make decisions based on reward. Finally, we are examining the ability of drug versus deep brain stimulation therapies to ameliorate deficits in these functions.

Website: http://inc2.ucsd.edu/poizner/index.html

Publications: http://inc2.ucsd.edu/poizner/publications.html

Solution Features

1. Core virtual reality engine, compatible with a wide range of 3D applications

The built-in core virtual reality software engine seamlessly supports many 3D applications, so design results can be brought in quickly for presentation and interaction.

2. Multi-channel technology for a fully immersive display

Patented virtual reality rendering technology achieves seamless stitching and blending across display channels, producing a convincing, immersive 3D experience.

3. In-house quantitative human-machine-environment evaluation based on virtual reality, providing objective data support for research

The independently developed ErgoLAB human-machine-environment synchronization platform, through its VR synchronization module, acquires multiple data streams in real time within the immersive 3D virtual environment and evaluates them quantitatively; the objective statistical results provide data support for scientific research.
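
As a purely illustrative example of what such a quantitative evaluation step can look like (the numbers, scene names, and time windows below are invented, and this is not ErgoLAB's own analysis code), the sketch aligns timestamped skin-conductance samples with the display windows of two virtual scenes and compares their means:

    # Illustrative post-processing sketch: per-scene mean of one physiological channel.
    from statistics import mean

    # Hypothetical recording: (time_s, value) samples plus scene display windows.
    skin_conductance = [(0.5, 1.1), (1.5, 1.3), (2.5, 2.0), (3.5, 2.2), (4.5, 1.4)]
    scene_windows = {"scene_A": (0.0, 2.0), "scene_B": (2.0, 4.0)}

    def mean_in_window(samples, start, end):
        values = [v for t, v in samples if start <= t < end]
        return mean(values) if values else float("nan")

    for scene, (start, end) in scene_windows.items():
        print(scene, round(mean_in_window(skin_conductance, start, end), 2))
    # scene_A -> 1.2, scene_B -> 2.1: a simple quantitative contrast between scenes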

4. Walking virtual reality under fully natural conditions, so data collected for quantitative analysis of human behavior are more realistic

The entire laboratory space serves as the tracking area of the walking virtual reality system; participants can walk freely without restriction and behave as they would in the real world, so the collected data are more faithful to natural behavior.

ErgoHMI Cockpit Ergonomics Virtual Reality System

Virtual reality offers an opportunity to develop a transformative experimental method with both good ecological validity and good internal validity. Traditional psychology experiments often sacrifice ecological validity to achieve high internal validity. In addition, virtual reality allows psychological experiments to be conducted under natural conditions, enabling more effective research on human visual perception, movement, and cognition.

