#Personal Robot Application Meta
#Humanoid
#Being performance augmentor
#Acting as human
#Reasoning
#Protecting
#Guarding assets
#Being friend
#Cooking
#Reading books
#Driving
#Reading map
#Fixing problems with cars
#Monitoring medicine taking
#Helping to use smart phone
#Remembering
#Managing banking
#Shopping
#Ordering
#Tracking deliveries
#Receiving orders
#Handling reclamations
#Keeping tally
#Advising how to minimize taxes
#Advising
#Aiding to hear
#Reading news
#Tuning TV
#Managing home appliances
#Managing digital assets
#Taking care of pets
#Helping with funerals
#Managing keys
#Engaging emotionally
#Maintaining eye contact
#Autonomous mission
#SLAM | Simultaneous Localization and Mapping
#Learning Management System (LMS)
#Robotic braces
#California wildfire | Challenges | Access roads too steep for fire department equipment | Brush fires | Dangerously strong winds for fire fighting planes | Drone interfering with wildfire response hit a plane | Dry conditions fueled fires | Dry vegetation primed to burn | Faults on the power grid | Fires fueled by hurricane-force winds | Fire hydrants gone dry | Fast moving flames | Hilly areas | Increasing fire size, frequency, and susceptibility to beetle outbreaks and drought-driven mortality | Keeping native biodiversity | Looting | Low water pressure | Managing forests, woodlands, shrublands, and grasslands for broad ecological and societal benefits | Power shutoffs | Ramping up security in areas that have been evacuated | Recovering the remains of people killed | Retardant drop pointless due to heavy winds | Smoke-filled canyons | Santa Ana winds | Time it takes for water-dropping helicopter to arrive | Tree limbs hitting electrical wires | Use of air tankers is costly and increasingly ineffective | Utilities' sensor network outdated | Water supply systems not built for wildfires on a large scale | Wire fault causes a spark | Wires hitting one another | Assets | California National Guard | Curfews | Evacuation bags | Firefighters | Firefighting helicopter | Fire maps | Evacuation zones | Feeding centers | Heavy-lift helicopter | LiDAR technology to create detailed 3D maps of high-risk areas | LAFD (Los Angeles Fire Department) | Los Angeles County Sheriff's Department | Los Angeles County Medical Examiner | National Oceanic and Atmospheric Administration | Recycled-water irrigation reservoirs | Satellites for wildfire detection | Sensor network of LAFD | Smoke forecast | Statistics | Beachfront properties destroyed | Death toll | Damage | Economic losses | Expansion of non-native, invasive species | Loss of native vegetation | Structures (home, multifamily residence, outbuilding, vehicle) damaged | California wildfire actions | Animals relocated | Financial recovery programs | Efforts toward wildfire resilience | Evacuation orders | Evacuation warnings | Helicopters dropped water on evacuation routes to help residents escape | Reevaluating wildfire risk management | Schools closed | Schools to be inspected and cleaned outside and in, and their filters must be changed
#Embodied AI | AI system integrated into robot | Focus on real-world interaction | Adapting dynamically to physical surroundings
#Dexterous manipulation | Enabling robot to grasp and manipulate objects with precision and versatility, mimicking human-like capabilities
#Generalist robotic learning
#A-list celebrity home protector | Burglaries targeting high-end items | Burglary report on Lime Orchard Road | Burglar had smashed glass door of residence | Ransacked home and fled | Couple were not home at the time | Unknown whether any items were taken | Lime Orchard Road is within the Hidden Valley gated community of Los Angeles in Beverly Hills | Penelope Cruz, Cameron Diaz, Jennifer Lawrence, Adele and Katy Perry have purchased homes there, in addition to Kidman and Urban | Kidman and Urban bought their home for $4.7 million in 2008 | The 4,100-square-foot, five-bedroom home was built in 1965 and sits on a 1¼-acre lot | The property's large windows have views of the canyons | Theirs is one of several celebrity properties burglarized in Los Angeles and across the country recently | Connected to South American organized-theft rings
#Professional athlete home protector | South American crime rings | Targeting wealthy Southern California neighborhoods for sophisticated home burglaries | Behind burglaries at homes of professional athletes and celebrities | Theft groups conduct extensive research before plotting burglaries | Monitoring target whereabouts and weekly routines via social media | Tracking travel and schedules | Conducting physical surveillance at homes | Attacks staged while targets and their families are away | Robbers aware of where valuables are stored in homes prior to staging break-ins | Burglaries conducted in short amount of time | Bypass alarm systems | Use Wi-Fi jammers to block Wi-Fi connections | Disable devices | Cover security cameras | Obfuscate identities
#EMG (Electromyography) | Measurement of electrical activity associated with activation of a muscle group, as detected by non-invasive electrodes on the skin surface
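A minimal sketch of how such a surface-EMG signal is commonly turned into a usable activation estimate: band-pass filter, rectify, and smooth. The sampling rate, band edges, window length, and threshold idea are illustrative assumptions, not values from the source.

```python
# Hypothetical EMG-processing sketch (assumed parameters, not from the source).
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Band-pass the raw signal, rectify it, and smooth with a moving RMS."""
    # 20-450 Hz is a commonly used band for surface EMG (assumption).
    b, a = butter(4, [20.0, 450.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)
    window = int(0.1 * fs)               # 100 ms RMS window (assumption)
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(rectified**2, kernel, mode="same"))

# A robot interface could, for example, treat envelope > threshold as "muscle active".
```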
#Multipurpose commercial humanoid | Potential for useful, reliable, and affordable humanoids | Difficult problem: making a highly technical piece of hardware and software compete effectively with humans in the labor market | Robots are not hard to build, but they are hard to make useful and to make money with | Whole perception pipeline can now run at the frame rate of the sensors | All the technology is here now | Starting with a surrogate robot from someone else to get the autonomy team going while building own robot in parallel | Giving out a significant chunk of the company to early joiners | Combined efforts of the research community enable commercialization | Building the team is really important
#Humanoid robots and fashion future | In Shanghai, humanoid robots transcend fashion hype, reimagining design, challenging beauty norms, and unlocking metaverse opportunities | Convergence of fashion and technology | Human-machine collaboration in fashion | Genuine, emerging trend | Creativity, production, and human-machine interaction | Robots are becoming experimental platforms | Integration of robots into the runway | Aesthetic Reinvention: designing beyond the human form | Fostering Human-Robot Collaboration From Runway to Production and Retail | Challenging Beauty Norms | Paving the Way for Future Trajectories: The Metaverse of Fashion
#Large Language Model (LLM) | Foundational LLM: e.g., Wikipedia in all its languages fed to the LLM one word at a time | LLM is trained to predict the next word most likely to appear in that context | LLM intelligence is based on its ability to predict what comes next in a sentence | LLMs are amazing artifacts, containing a model of all of language, on a scale no human could conceive or visualize | LLMs do not apply any value to information, or truthfulness of sentences and paragraphs they have learned to produce | LLMs are powerful pattern-matching machines but lack human-like understanding, common sense, or ethical reasoning | LLMs produce merely a statistically probable sequence of words based on their training | LLMs are very good at summarizing | Inappropriate use of LLMs as search engines has produced lots of unhappy results | LLM output follows path of most likely words and assembles them into sentences | Pathological liars as a source for information | Incredibly good at turning pre-existing information into words | Give them facts and let them explain or impart them
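A toy illustration of the "predict the most likely next word" idea above: a bigram model built from a tiny made-up corpus. Real LLMs use transformer networks trained on vast corpora, but the objective sketched here, picking a statistically probable continuation, is the same in spirit.

```python
# Toy next-word predictor from bigram counts (corpus and words are made up).
from collections import Counter, defaultdict

corpus = "the robot picks the box and the robot stacks the box".split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("the"))    # -> "robot" or "box" (tied in this tiny corpus)
print(predict_next("robot"))  # -> "picks" or "stacks"
```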
#Retrieval Augmented Generation (RAG LLM) | Designed for answering queries in a specific subject, for example, how to operate a particular appliance, tool, or type of machinery | System takes as much textual information about the subject as possible, such as user manuals, and pre-processes it into small chunks, each containing a few specific facts | When user asks a question, software system identifies the chunk of text most likely to contain the answer | Question and chunk are then fed to LLM, which generates a human-language answer in response to the query | Enforcing factualness on LLMs
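A minimal sketch of that flow: split the source text into small chunks, find the chunk most relevant to the question, and hand both to an LLM. The `ask_llm` hook is a placeholder for whatever model API is actually used, and the word-overlap scoring is a simplification (production systems typically use embedding similarity).

```python
# Hypothetical RAG pipeline sketch; `ask_llm` is a placeholder, not a real API.

def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split a manual into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(question: str, chunks: list[str]) -> str:
    """Pick the chunk with the most word overlap with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to an actual LLM endpoint")

def answer(question: str, manual_text: str) -> str:
    context = best_chunk(question, chunk_text(manual_text))
    prompt = f"Using only this excerpt:\n{context}\n\nAnswer: {question}"
    return ask_llm(prompt)
```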
#Unitree R1 humanoid | Agile mobility: 24-26 DOF for adaptation to complex scenarios; its 2-DOF head enhances environmental perception | Lightweight structure, easy maintenance: ≤121 cm agile form, ultra-lightweight at about 25 kg, ready out of the box | Integrated with Large Multimodal Model for voice and images: fully open control interfaces for joints and sensors, with support for mainstream simulation platforms | Height x Width x Thickness (Standing): 1210 x 357 x 190 mm | Degrees of Freedom (Total Joints): 24 | Single Leg Degrees of Freedom: 6 | Single Arm Degrees of Freedom: 5 | Waist Degrees of Freedom: 2 | Head Degrees of Freedom: None | Dexterous Hand: Not included | Joint Output Bearing: Crossed roller bearings, double hook ball bearings | Joint Motor: Low-inertia, high-speed internal-rotor PMSM (permanent magnet synchronous motor; better response speed and heat dissipation) | Maximum Torque of Arm Joint: approx. 2 kg | Calf + Thigh Length: 675 mm | Forearm + Upper Arm Length: 435 mm | Joint Movement Range: Waist Y ±150°, R ±30°; Knee −10° to +148°; Hip Y ±157°, P −168° to +146°, R −60° to +100° | Electrical Routing: Hollow + internal routing | Joint Encoder: Dual + single encoder | Cooling System: Local air cooling | Power Supply: Lithium battery | Basic Computing Power: 8-core high-performance CPU | Microphone Array: 4-mic array | Speaker: Yes | WiFi 6 | Bluetooth 5.2 | Humanoid Binocular Camera | NVIDIA Jetson Orin Optional (40-100 TOPS) | Smart Battery (Quick Release) | Charger | Manual Controller | Battery Life: about 1 h | Intelligence Upgrades: OTA | Warranty Period: 8 Months
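A small sketch of how the published R1 figures could be held as a typed record, convenient for planning or fleet tooling that needs them programmatically. The field names and the idea of a dataclass are my own; the values are copied from the spec line above.

```python
# Hypothetical spec record for the Unitree R1 (field names are assumptions).
from dataclasses import dataclass

@dataclass(frozen=True)
class UnitreeR1Spec:
    height_mm: int = 1210
    width_mm: int = 357
    thickness_mm: int = 190
    mass_kg: float = 25.0        # "about 25 kg"
    total_joints: int = 24
    leg_dof_each: int = 6
    arm_dof_each: int = 5
    waist_dof: int = 2
    battery_life_h: float = 1.0  # "about 1 h"

r1 = UnitreeR1Spec()
print(f"R1 stands {r1.height_mm / 1000:.2f} m tall with {r1.total_joints} joints")
```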
#Large Behavior Model (LBM) | Controlling the entire robot's actions | Joint research partnership between Boston Dynamics and Toyota Research Institute | Collaboration aims to create a general-purpose humanoid assistant | Whole-body movements: walking, crouching, and lifting to complete tasks that involve sorting and packing
#AI generalist robot | Developing end-to-end language-conditioned policies | Taking full advantage of the capabilities of the humanoid form factor, including taking steps, precisely positioning its feet, crouching, shifting its center of mass, and avoiding self-collisions | Building policies process: 1. Collect embodied behavior data using teleoperation, both on real-robot hardware and in simulation, 2. Process, annotate, and curate the data to easily incorporate it into the machine learning pipeline, 3. Train a neural-network policy using all of the data across all tasks, 4. Evaluate the policy using a test suite of tasks | Policy maps inputs consisting of images, proprioception, and language prompts to actions that control the robot at 30 Hz | Leveraging a diffusion transformer together with a flow-matching loss to train the model | Dexterous manipulation including part picking, regrasping | Subtasks triggered by passing a high-level language prompt to the policy | Reacting intelligently when things go wrong | With Large Behavior Model (LBM), the training process is the same whether it is stacking rigid blocks or folding a t-shirt: if you can demonstrate it, the robot can learn it | Speeding up execution at inference time without requiring any training-time changes
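A minimal sketch of the flow-matching objective mentioned above, with a small MLP standing in for the diffusion transformer and random tensors standing in for real observation encodings and teleoperated action chunks. The dimensions, network shape, and batch size are illustrative assumptions.

```python
# Flow-matching training-step sketch (assumed sizes; toy network, not the real model).
import torch
import torch.nn as nn

obs_dim, act_dim, chunk = 256, 38, 16          # assumed sizes
net = nn.Sequential(
    nn.Linear(obs_dim + act_dim * chunk + 1, 512),
    nn.ReLU(),
    nn.Linear(512, act_dim * chunk),
)

def flow_matching_loss(obs: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """obs: [B, obs_dim]; actions: [B, chunk, act_dim] demonstrated action chunk."""
    b = obs.shape[0]
    x1 = actions.reshape(b, -1)                 # target action chunk
    x0 = torch.randn_like(x1)                   # noise sample
    t = torch.rand(b, 1)                        # interpolation time in [0, 1]
    xt = (1 - t) * x0 + t * x1                  # point on the straight path
    target_velocity = x1 - x0                   # velocity the model should predict
    pred = net(torch.cat([obs, xt, t], dim=-1))
    return nn.functional.mse_loss(pred, target_velocity)

loss = flow_matching_loss(torch.randn(8, obs_dim), torch.randn(8, chunk, act_dim))
loss.backward()
```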
#Teleoperation | High-Quality Data Collection for Model Training | Control system allows the operator to perform precise manipulation while the robot maintains balance and avoids self-collisions | VR headset lets operators fully immerse themselves in the robot workspace and access the same information as the policy, with spatial awareness bolstered by a stereoscopic view rendered from head-mounted cameras reprojected to the user's viewpoint | Custom VR software provides the teleoperator with a rich interface to command the robot, providing real-time feeds of robot state, control targets, sensor readings, tactile feedback, and system state via augmented reality, controller haptics, and heads-up display elements | One-to-one mapping between user and robot (i.e., moving your hand 1 cm causes the robot to also move by 1 cm) | To support mobile manipulation, foot tracking was added and teleoperation control extended so that stance mode, support polygon, and stepping intent match those of the operator
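A sketch of the one-to-one mapping described above: operator hand motion, measured by the VR tracker, is applied directly to the robot end-effector target, so moving the hand 1 cm moves the commanded target 1 cm. The class name, frames, and usage are assumptions for illustration, not the actual teleoperation interface.

```python
# Hypothetical 1:1 hand-to-end-effector mapping (positions in metres).
import numpy as np

class OneToOneTeleop:
    def __init__(self, robot_home: np.ndarray, operator_home: np.ndarray):
        # Positions captured once, when teleoperation is engaged.
        self.robot_home = robot_home
        self.operator_home = operator_home

    def target_from_hand(self, operator_hand: np.ndarray) -> np.ndarray:
        """Return the commanded end-effector position for the current hand pose."""
        displacement = operator_hand - self.operator_home   # unscaled
        return self.robot_home + displacement                # 1:1, no gain factor

teleop = OneToOneTeleop(robot_home=np.array([0.4, 0.0, 1.1]),
                        operator_home=np.array([0.1, 0.2, 1.3]))
print(teleop.target_from_hand(np.array([0.11, 0.2, 1.3])))   # hand moved 1 cm
```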
#Policy | Toyota Research Institute Large Behavior Model | Diffusion Policy-like architecture | Boston Dynamics policy | Diffusion Transformer-based architecture | Flow-matching objective | Conditioned on proprioception and images | Accepting a language prompt that specifies the objective to the robot | Image data comes in at 30 Hz | Network uses a history of observations to predict an action chunk | Observation space consists of images from the robot's head-mounted cameras along with proprioception | Action space includes joint positions for left and right grippers, neck yaw, torso pose, left and right hand pose, and left and right foot poses | Shared hardware and software across the two robots aids in training multi-embodiment policies that can function across both platforms, allowing data from both embodiments to be pooled | Quality assurance tooling allows reviewing, filtering, and providing feedback on the data collected
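A sketch of how such an action-chunking policy could be driven at run time: build an observation from the head cameras and proprioception, predict a short chunk of future actions, execute it, then re-plan. The `policy` and `robot` objects, their methods, and the chunk size are assumed placeholders, not the actual Boston Dynamics / TRI interfaces.

```python
# Hypothetical action-chunk execution loop (all robot/policy methods are placeholders).
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    head_images: list[np.ndarray]   # head-mounted camera frames
    proprioception: np.ndarray      # joint positions, poses, etc.
    language_prompt: str            # e.g. "pick the part and place it in the bin"

def run_policy(policy, robot, prompt: str, chunk_size: int = 8) -> None:
    while not robot.task_done():
        obs = Observation(robot.camera_frames(), robot.joint_state(), prompt)
        action_chunk = policy.predict(obs)          # shape [chunk_size, action_dim]
        for action in action_chunk[:chunk_size]:
            robot.send_action(action)               # grippers, neck, torso, hands, feet
            robot.wait_for_next_tick()              # paced at ~30 Hz
```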
#Simulation | Allows quick iteration on the teleoperation system and writing of unit and integration tests | Performing informative training and evaluations that would otherwise be slower, more expensive, and difficult to perform repeatably on hardware | Simulation stack is a faithful representation of the hardware and on-robot software stack | Ability to share the data pipeline, visualization tools, training code, VR software, and interfaces across both simulation and hardware platforms | Benchmarking policy and architecture choices | Incorporating simulation as a significant co-training data source for multi-task and multi-embodiment policies deployed on hardware
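A small sketch of what "simulation as a co-training data source" can look like in practice: each training batch mixes episodes collected on hardware with episodes collected in simulation at a fixed ratio. The datasets, function name, and ratio are illustrative assumptions.

```python
# Hypothetical sim/real co-training batch sampler (names and ratio are assumptions).
import random

def mixed_batch(real_episodes: list, sim_episodes: list,
                batch_size: int = 32, sim_fraction: float = 0.5) -> list:
    """Sample a co-training batch drawing `sim_fraction` of items from simulation."""
    n_sim = int(batch_size * sim_fraction)
    n_real = batch_size - n_sim
    return random.sample(sim_episodes, n_sim) + random.sample(real_episodes, n_real)

# batch = mixed_batch(real_eps, sim_eps)  # fed to the same training code as real-only data
```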