Call for Demos
Welcome to the 6G Summit, Abu Dhabi, UAE
3rd — 4th November 2022
Venue: W Abu Dhabi, Yas Island
The Abu Dhabi 6G Summit invites submissions of proposals for demonstration sessions (demos), offering the opportunity for researchers from the region to showcase their ground-breaking prototypes and applications to a wide audience of local and international corporations and institutions. The demo sessions also serve as a unique opportunity to increase the international visibility of leading and emerging local research initiatives, opening the floor for live discussions among accomplished industry researchers and engineers around cutting-edge technologies and innovative solutions.
The potential demo proposals are expected to focus on topics that are in line with the advancement of 6G wireless networks. The following directions can be considered for the proposed demonstrations:
Novel 6G applications and technologies
Novel hardware and tools
New research prototypes
Testbeds
Platforms for research and real-world deployments
Proof-of-concept demonstrations
List of accepted demos at the Abu Dhabi 6G Summit 2022
Nikolaos Giakoumidis
New York University Abu Dhabi
Email: giakoumidis@nyu.edu
Demonstrator description:
The demonstration consists of two multifunctional quadrupedal robotic platforms that have been developed to be utilized in a wide range of applications and industries such as Search and Rescue missions, Industrial inspection, Construction inspection, Topography, Surveillance, etc.
The quadrupedal robots are modular and can be reconfigured quickly at both the hardware and software level to adapt to specific application needs. The robots are equipped with payloads such as robotic arms, lidars, communication modules, AI units, and perception systems, which, in combination with the robots' articulated motion, allow them to navigate complex and dynamic environments such as construction sites, mines, and industrial buildings in order to collect data and interact with the physical environment.
The robots can operate individually or in collaboration to complete a task. For example, during an exploration mission, the first robot equipped with a Robotic Arm opens a door for the second robot fitted with a 3D Scanner to pass through and map a new area.
The hardware of the robots and payloads is driven by a multilayer software structure in which lower-level algorithms handle kinematics, locomotion, and data acquisition, while high-level AI algorithms make decisions for autonomous operation.
In addition, this multilayer control structure simplifies the robot's operation during teleoperation. The operator can focus on the task by sending high-level requests such as “go to a specific location”, “find an object”, “turn a valve”, or “open a door”, and the robot takes all the necessary variables into consideration to execute the request.
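As a rough illustration of this layered structure, the following Python sketch shows how an operator's high-level request might be routed to lower control layers; the class and method names are hypothetical and do not reflect the robots' actual control software.

```python
# Hypothetical sketch of a layered command dispatcher; names are illustrative
# and not taken from the robots' actual control software.

from dataclasses import dataclass

@dataclass
class Request:
    action: str   # e.g. "go_to", "find_object", "turn_valve", "open_door"
    params: dict  # e.g. {"x": 12.0, "y": 4.5} for a navigation goal

class HighLevelController:
    """Accepts operator requests and delegates them to lower control layers."""

    def __init__(self, planner, locomotion, perception):
        self.planner = planner        # path-planning layer
        self.locomotion = locomotion  # kinematics / gait-control layer
        self.perception = perception  # sensing and object-detection layer

    def handle(self, request: Request):
        if request.action == "go_to":
            path = self.planner.plan(request.params["x"], request.params["y"])
            self.locomotion.follow(path)
        elif request.action == "find_object":
            return self.perception.search(request.params["label"])
        else:
            raise ValueError(f"Unsupported request: {request.action}")
```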
An essential component of these robots is the wireless communication system based on a Mobile Ad Hoc Network (MANET).
The communication modules create a long-range, self-forming, self-healing, high-bandwidth network that keeps the robots interconnected while they stream large amounts of data such as 3D maps, photos, videos, and measurements.
In addition to the above functionalities, the parameters of the physical communication layer, such as the operating frequency, transmission power, bandwidth, and Multiple-Input Multiple-Output (MIMO) topology, can be configured to adapt to the operating environment. The robots can also carry additional communication modules as payloads, which can be placed at critical positions to extend the communication range or to bypass obstacles that block the signal.
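As a minimal sketch of such environment-dependent reconfiguration, the snippet below groups the tunable physical-layer parameters into a record and selects a profile per environment; the field names, values, and profiles are assumptions for illustration, not the modules' actual settings.

```python
# Hypothetical radio-configuration record; parameter values and profiles are
# illustrative and not tied to a specific MANET module.

from dataclasses import dataclass

@dataclass
class RadioConfig:
    frequency_mhz: float  # operating frequency
    tx_power_dbm: float   # transmission power
    bandwidth_mhz: float  # channel bandwidth
    mimo_streams: int     # number of MIMO spatial streams

def pick_profile(environment: str) -> RadioConfig:
    """Choose an assumed profile based on the operating environment."""
    if environment == "open_field":
        return RadioConfig(frequency_mhz=2450.0, tx_power_dbm=20.0,
                           bandwidth_mhz=20.0, mimo_streams=2)
    # Cluttered sites (mines, industrial buildings): trade bandwidth for range.
    return RadioConfig(frequency_mhz=1375.0, tx_power_dbm=27.0,
                       bandwidth_mhz=5.0, mimo_streams=2)
```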
Demo 2: Demonstrator Title: Smart Door Lock for Sustainable Smart Cities using 6G
Technological advancement in 6G networks is enabling a new lifestyle, of which smart houses are a typical example. Smart houses build on the technological foundation of making everything smart, such as air conditioning, temperature, sound, and security systems. As technology is such an integral part of our daily lives, it is no surprise that consumers are increasingly converting their homes into smart homes; as a result, the smart-home market is expected to double by 2025. In a smart house, automated and intelligent door locks are among the key components for enhanced security and user experience.
This demo focuses on developing an intelligent lock that uses cutting-edge technologies for improved security and user experience. Smart locks are a relatively new and evolving technology. Our intelligent lock is a Wi-Fi-enabled smart home device that controls locking and unlocking via face recognition and a mobile app, so homeowners no longer have to carry keys. Remote access to let in a guest while you are at work, keeping a log of access, and liveness detection are among the distinguishing features of our intelligent door lock.
Our product is a computer-vision-based solution that employs face recognition to control access to the door. Every user first registers his/her details via the mobile app. After signing up, the user is required to provide his/her image, and an embedding of the face is stored as a vector, which is later used for comparison. Only then can the user claim a door by ID, and the user is associated with the claimed door as an owner. User details are stored in a central database. At the door, we use a high-quality camera (attached to a Raspberry Pi in the prototype) running a Python script to detect a person approaching the door. The camera assesses each frame, and when a face is detected, the frame is sent to the server, where a vector of the frame is created and made ready for comparison. On the server side, the new vector is compared against the one stored in the database to determine whether the person attempting to gain access has the right to get through. When a certain similarity threshold is reached, the door opens automatically.
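A minimal sketch of the server-side comparison step described above, assuming the embeddings are fixed-length vectors; the similarity metric, threshold value, and function names are illustrative rather than the exact implementation.

```python
# Minimal sketch of the server-side face-verification step. The embedding
# model, similarity metric, and threshold are assumptions for illustration.

import numpy as np

SIMILARITY_THRESHOLD = 0.6  # assumed cut-off; tuned empirically in practice

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_unlock(new_embedding: np.ndarray, stored_embedding: np.ndarray) -> bool:
    """Compare the embedding of the incoming frame against the registered user."""
    return cosine_similarity(new_embedding, stored_embedding) >= SIMILARITY_THRESHOLD
```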
To this end, we are working to incorporate a liveness detection method, a technique used to detect spoofing attempts by determining whether the source of a biometric sample is a live human being or a fake representation. In the near future, as an extension of this work, we aim to apply it to classroom attendance, conference hall access monitoring, and smart security and alarm systems.
Demo 3: Demonstrator Title: Demo on Energy Efficient Aerial Data Aggregation in IoT Networks Based on RF Wake-up
Author(s):
S. Mohammed, A. Alhejab, A. Abdelrahman and A. Al-Radhwan are with King Fahd University of Petroleum and Minerals, Saudi Arabia.
H. Elsawy is with the School of Computing, Queen’s University, Kingston. hesham.elsawy@queensu.ca
Internet of Things (IoT) systems are expected to transform the world through intelligent computing, sensing, and automation. Nevertheless, ubiquitous IoT deployment is constrained by the limited batteries of the devices, while ever-evolving IoT services demand fully autonomous things free of energy constraints. Thanks to the hardware and computing advancements offered by the sixth generation (6G) of wireless systems and by unmanned aerial vehicles (UAVs), UAV-enabled systems are being proposed as promising solutions to a number of challenges in IoT networks, attracting the attention of both academia and industry. In particular, UAVs can collect data from IoT devices deployed in remote locations and supplied with limited batteries; since the devices no longer need to transmit over long distances, their energy consumption is reduced, which enhances the reliability of the network and extends its lifetime.

Wake-Up Radio (WuR) has emerged as a potential technique to further reduce the energy consumption of IoT devices. WuR solutions enable on-demand wake-up of an IoT device through the identification of a radio frequency (RF) wake-up call (WuC) carrying the specific address of the intended device. WuR has shown great benefits in extending the operational lifetime of IoT networks and reducing latency compared with conventional duty-cycling techniques.

Following the recent improvements in WuR technologies and UAVs, we propose to integrate WuR with UAVs to further enhance the energy efficiency of IoT devices. Specifically, we design, develop, test, and optimize a UAV-enabled IoT data aggregation solution supported by WuR. The proposed solution adds hardware components to the UAV and to the IoT device so that the device can be triggered on demand by a WuC from the UAV, which activates the device and collects the sensed data. To this end, this demo presents a real experimental testbed to validate the feasibility of the proposed solution and to highlight the gains it brings, in terms of reduced energy consumption and extended lifetime, towards perpetual operation of IoT devices in 6G networks. The experimental testbed includes a UAV with an onboard wake-up transmitter (Wu-Tx) and a data transceiver, one IoT device connected to several sensors and integrated with an efficient wake-up receiver (Wu-Rx), and a web application to display the collected data.
IoT device: The designed energy-efficient IoT device measures only 7 × 7 cm² and remains in power-down mode throughout the deployment period, drawing less than 100 μA while idle, until prompted by a WuC from the UAV. The device stays in this low-energy sleep mode whenever it is not performing a useful task and switches to the active state only when it receives an external WuC from the UAV. Upon successful activation, the IoT device sends its sensed data to the UAV and returns to sleep mode to conserve energy. The IoT device consists of an ATMEGA328PU MCU connected to a set of sensors (here we focus on agricultural sensors), an NRF24L01 transceiver, a commercial short-range AS3933 Wu-Rx, an antenna, and a range-extension frequency-tunable circuit (REFTC). The data transceiver uses a separate communication channel from the one used for wake-up to send data from the IoT device to the UAV, which ensures stability and avoids possible interference. To generate the WuC, we implement sub-carrier modulation (SCM), in which a high-frequency signal emulates a modulated 32 kHz signal containing the address of the IoT device to be activated. The REFTC is integrated as an input stage to the AS3933 circuit, which operates at 15-150 kHz and is used to extract the address of the IoT device from the WuC. The main role of the REFTC is to down-convert the WuC from the MHz to the kHz band, which the AS3933 circuit can recognize, by means of a rectifier and a low-pass filter. If the address of the IoT device is detected, a trigger interrupt wakes the MCU and activates the IoT device for data collection. The developed IoT device achieves a wake-up range of up to 20 m when tested at a transmit power of 12 dBm.
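The firmware itself runs in C on the ATMEGA328PU; purely as a behavioural illustration of the sleep/wake/transmit cycle described above, the Python sketch below models the duty cycle, with all class and method names being placeholders.

```python
# Behavioural sketch of the IoT device duty cycle described above. The real
# firmware runs in C on the ATMEGA328PU; all names here are placeholders.

class IoTDevice:
    def __init__(self, wurx, transceiver, sensors):
        self.wurx = wurx                # AS3933-style wake-up receiver
        self.transceiver = transceiver  # NRF24L01-style data transceiver
        self.sensors = sensors          # dict of named sensor objects

    def run(self, my_address: str) -> None:
        while True:
            # Power-down mode: block until the wake-up receiver detects a WuC
            # carrying this device's address and raises an interrupt.
            self.wurx.wait_for_wake_up_call(address=my_address)

            # Active state: read the sensors and send the data to the UAV.
            readings = {name: sensor.read() for name, sensor in self.sensors.items()}
            self.transceiver.send(readings)
            # The loop then returns to the blocking wait, i.e. back to sleep.
```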
UAV and web application: A custom-assembled UAV is used to activate the IoT device and collect the sensed data. The UAV carries a CC1101 Wu-Tx, which can be programmed to send WuCs to specific devices according to predefined addresses. Furthermore, an NRF24L01 data transceiver is mounted onboard to relay the aggregated data to a base station, which uploads them to a cloud database for further processing. A web application is developed to visualize the data collected by the UAV in real time, enabling faster decisions. Because flying a UAV during the demo presentation may not be feasible, we may emulate the drone using only the Wu-Tx and the data transceiver.
Information on the equipment used for the demo, and the space and setup required
As shown in the figure, the demo setup will consist of the Wu-Tx attached to the top of a relatively high platform (to a UAV if possible). Artificial grass will cover the surface of the table so that it resembles an agricultural field, and the manufactured IoT devices will be installed on top of it. The Wu-Tx will send WuCs to wake up the IoT devices, which will then send their sensed data to the base station. The base station will show the collected data on the screen and upload it to an online dataset. A mobile application installed on a smartphone will retrieve the data from the online dataset and display it for the user in real time. Demo guests will be able to use the mobile application to retrieve data from the IoT devices on demand and watch the sensed values change in real time. For example, guests can apply water to the moisture sensor of an IoT device and watch the sensed value change in the mobile application. If possible, a poster will be installed behind the stand, and an explanatory motion graphic will be played on the screen to help clarify the idea for the audience and catch their attention.
Project Videos of experiments and explanatory motion graphic
Video 1 (Experiments and motion graphic):
Video 2 (Promotional video):
Demo 4: Demonstrator Title: Demo: Beam Training for Reconfigurable Intelligent Surfaces Enabled Real-Time Video Transmission Without Perfect CSI
Author(s):
Yutong Zhang, State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing, China, yutongzhang@pku.edu.cn
Haobo Zhang, State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing, China, haobo.zhang@pku.edu.cn
Ziang Yang, State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing, China, yangziang@pku.edu.cn
Boya Di, State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing, China, diboya@pku.edu.cn
Hongliang Zhang, Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA, hongliang.zhang92@gmail.com
Demonstrator description:
The reconfigurable intelligent surface (RIS) has recently attracted attention as a potential means of handling the exponentially increasing data transmission demands of future sixth-generation (6G) wireless networks. Specifically, a RIS is an ultra-thin surface containing multiple nearly passive scattering elements with controllable electromagnetic properties, which enables exotic manipulations of the signals impinging upon it. Nevertheless, accurate channel state information (CSI) for RIS-aided communications is hard to acquire due to the passive nature and large number of the elements. Hence, conventional channel estimation methods may incur extremely high complexity, which hinders practical implementation.
To tackle this issue, in this demo we first design a RIS prototype to shape the propagation environment as desired. In contrast to traditional high-complexity channel estimation methods, a beam training framework is then developed to enable RIS-aided multi-receiver (Rx) transmission without the acquisition of accurate CSI, providing a practical solution for real-time video transmission. By performing beam training, the incident signal can be re-radiated towards the Rxs accurately, leading to high received signal power. On this basis, we deploy a RIS-aided multi-Rx platform to support real-time video transmission, which achieves better performance than conventional systems without a RIS.
Scientific and Technical Description
We consider a downlink system consisting of a multi-antenna transmitter (Tx) and two Rxs, each equipped with multiple antennas. Due to the complexity and dynamics of the wireless environment, the direct links between the Tx and the Rxs may be unstable or even blocked. To alleviate this issue, we deploy a RIS between the Tx and the Rxs to reflect the incident signals with reconfigurable radiating elements. Nevertheless, accurate CSI for RIS-aided communication systems is hard to acquire due to the passive nature and large number of the elements. Therefore, we develop a beam training framework to configure the RIS instead of acquiring accurate CSI, thereby supporting RIS-aided real-time video transmission.
1. Reconfigurable Intelligent Surface Prototype Design: We design a RIS prototype in which each RIS element includes a top microwave structure consisting of two metal patches and a PIN diode, two substrate layers, a ground layer, and two via holes. Switching the ON/OFF state of the PIN diode changes the phase response of the element by 180 degrees at 5.5 GHz.
2. Beam Training Pipeline: The beam training for RIS-aided real-time video transmission consists of two phases: traversing the codebook and feedback. Starting from a randomly initialized RIS configuration, the codewords in the codebook are applied sequentially to configure the RIS under the control of the Tx. For each codeword, every Rx receives the signal and records the index of the current codeword together with the power of the received signal. The codeword yielding the largest received power at each Rx is regarded as that Rx's optimal codeword and is fed back to the Tx. By combining the optimal codewords of all Rxs, the Tx reconfigures the RIS to support RIS-aided real-time video transmission.
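The following Python sketch illustrates the two-phase procedure described above; configure_ris(), measure_received_power(), and the codebook contents are placeholders rather than the authors' implementation.

```python
# Minimal sketch of the two-phase beam-training procedure: traverse the
# codebook, then each Rx feeds back its best codeword index to the Tx.
# configure_ris() and measure_received_power() are hypothetical helpers.

import numpy as np

def beam_training(codebook, rx_list, configure_ris, measure_received_power):
    best_index = {rx: None for rx in rx_list}
    best_power = {rx: -np.inf for rx in rx_list}

    # Phase 1: apply every codeword and let each Rx record its received power.
    for idx, codeword in enumerate(codebook):
        configure_ris(codeword)               # Tx applies the codeword to the RIS
        for rx in rx_list:
            power = measure_received_power(rx)
            if power > best_power[rx]:
                best_power[rx], best_index[rx] = power, idx

    # Phase 2: each Rx feeds back its optimal index; the Tx combines them to
    # set the final RIS configuration.
    return best_index
```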
Implementation
In this demo, we implement a RIS-aided system operating at 5.5 GHz and consisting of a Tx and two Rxs. The Rxs and the RIS are deployed 4.5 m and 0.45 m away from the Tx, respectively. The digital video broadcast-terrestrial (DVB-T) standard is adopted for real-time video transmission. Experimental results show that the received power at the Rxs varies with the applied codeword, and the received power with the optimal codeword is 15 dB higher than with the worst one. Moreover, two different videos are transmitted to the two Rxs with the assistance of the RIS, at a bit rate double that of a conventional system without the RIS. We also observe that, without the RIS, the received videos exhibit frame jumps, discontinuities, and mosaic artifacts; these are removed when the RIS is employed, since signals are delivered via a larger number of reflected links, implying that the spatial resources are better utilized.
Research Contributions
1. We design a RIS prototype and deploy a RIS-aided multi-Rx communication platform.
2. A beam training framework is developed to enable RIS-aided multi-Rx transmission without the acquisition of accurate CSI, providing a practical solution for real-time video transmission.
3. Experimental results show that the bit rate of the transmitted video in this demonstration is double that of conventional systems without a RIS.
Demo 5: Demonstrator Title: Ubiquitous communication in drone constellations
Author(s):
Dr. Nikolaos Evangeliou, New York University Abu Dhabi, Center for Artificial Intelligence and Robotics (CAIR), nikolaos.evangeliou@nyu.edu
Dr. Athanasios Tsoukalas, Center for Artificial Intelligence and Robotics (CAIR), New York University Abu Dhabi, athanasios.tsoukalas@nyu.edu
Mr. Dimitrios Chaikalis, New York University, New York; NYU Abu Dhabi Robotics and Intelligent Systems Control Lab (RISC)
Prof. Anthony Tzes, New York University Abu Dhabi, Center for Artificial Intelligence and Robotics (CAIR), anthony.tzes@nyu.edu
Demonstrator description:
Unmanned Aerial Vehicle (UAV) technology has been at the forefront of robotics research and technological development over the past decade. Drones are gradually transforming from laboratory prototypes into commercial industrial products, capable of carrying out search and rescue missions, surveillance, payload delivery, mapping, inspection, and more.
The introduction of this new technology calls for new control algorithms and smart implementations to carry out the aforementioned tasks, while continuously monitoring the aerial agents at long distances under regulatory framework requirements.
In this context, the Center for Artificial Intelligence and Robotics (CAIR) of New York University Abu Dhabi (NYUAD) has designed, fabricated, and programmed aerial agents that can carry out complex tasks such as the ones mentioned above. It is worth mentioning that the control software is designed in-house and is completely open source, whereas the hardware is assembled within NYUAD and incorporates next-generation regulatory safety features, such as:
1. Autonomous parachute deployment
2. Propeller protectors
3. Up to 40 km telemetry radios
4. Up to 20 km pilot remote-control (RC) range
5. Up to 3,300 ft, 1080p/60 Hz video transmission with less than 1 ms latency.
Building upon this flying and communication platform, NYUAD researchers have designed all-weather hybrid systems that can travel through air, on water, and over land and act as relays for a next-generation 6G mesh network. For this we demonstrate two drones focusing on the selected theme: ubiquitous communication in drone constellations. The first drone is a large octarotor with a 7-DoF robot arm attached to it; it can carry up to 10 kg and can be used as a relay mechanism for transmitting 6G signals. The second drone can act as a ground vehicle (using three omni wheels) or as a vessel (using two water-jet thrusters). For operation at sea or on land, we plan to attach solar cells to the styrofoam structure used for buoyancy in order to prolong its battery life. At the same time, we are designing a power-over-tether solution that provides 2.5 kW to a flying drone, enabling ubiquitous flight.
Extended reality (XR) is a game-changing technology when it comes to combining two distinct environments: the real and the digital worlds. The XR-based Unmanned Aerial Vehicle (UAV) control project aims at giving an immersive feel when controlling a UAV. Virtual reality (VR) applications are gaining much attention in many fields such as gaming, healthcare, and tourism. The XR-based UAV control project is a fusion of many state-of-the-art technologies to provide a unique, interaction-rich user experience when controlling a UAV. The control system is built using a VR headset that streams a 360° camera view to the user through a fully immersive web application, haptic feedback delivered through a haptic suit, and a virtualized gaming treadmill platform that translates the user's movements into commands for the hovering drone. The control and streaming packets are routed through the 5G network to provide a system that could be used to experience the feel of flying a drone from anywhere in the world. The main contribution of this demo is to tackle the latency associated with applications that require real-time or near-real-time interaction. The demo will present the full XR-based UAV remote control through a predefined scenario to highlight the implemented features of this system.
Setup overview
The demonstration system consists of a UAV equipped with a 5G modem and a 360° camera, a VR headset, and a gaming treadmill (Figures 1 and 3). The system aims at controlling a UAV in First Person View (FPV) remotely over the internet, which requires ensuring an acceptable streaming delay for real-time control. The global architecture of the proposed emulation and testbed is shown in Figure 1. The individual components of the system are briefly described as follows:
Drone payload: As shown in Figure 2, the drone payload consists of a) an embedded single-board computer that receives the user control commands, streams the 360° camera video, and transmits the haptic vibrations; b) a 5G modem (Netgear Nighthawk M5) for network communication; and c) a 360° camera (Ricoh Theta Z1). The camera is directly connected to the embedded system through an Ethernet cable. The embedded software transmits the video stream to the cloud server. The flight controller is linked to the embedded single-board computer through a USB port. The embedded single-board computer acts as a relay, receiving the control commands sent by the user to the cloud server and transmitting UAV position data, along with channel and network metrics, to the web application hosted on the cloud server.
Gaming treadmill: This platform senses the user's movement, such as speed, heading, and height, and publishes those values to the remote cloud server. The received user movements are mapped onto the drone's speed, yaw, and altitude. The treadmill platform software prioritizes and filters the sensed values to ensure that the drone executes the requested flight maneuvers smoothly.
Figure 1: Global Architecture
VR headset: The VR headset presents the 360° camera FPV view to the user, as well as the measurements from the different drone sensors. A web-based application hosted on the remote cloud server is accessible through the VR web browser. The VR headset controllers are configured to arm the drone and enable the gaming treadmill controls; they are also used to change the camera view and angles.
Cloud-server-hosted services: The remote cloud server hosts two services: a) an RTSP media server that restreams the camera frames, and b) an MQTT broker. The latter provides topics for the different sensors' measurements and for the different controls.
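As a sketch of how the treadmill readings could be turned into drone set-points and published on the broker, the snippet below maps speed, heading, and height to speed, yaw, and altitude commands; the topic name, limits, and function names are assumptions, not the project's actual configuration.

```python
# Sketch of mapping treadmill readings to drone set-points published over MQTT.
# Topic names, limits, and helper names are assumptions for illustration only.

import json

def map_treadmill_to_drone(speed_mps: float, heading_deg: float, height_m: float) -> dict:
    """Map user movement (speed, heading, height) to drone speed, yaw and altitude."""
    return {
        "speed": min(speed_mps, 5.0),    # assumed forward-speed cap of 5 m/s
        "yaw": heading_deg % 360.0,      # user heading drives drone yaw
        "altitude": max(height_m, 1.0),  # keep at least 1 m above ground
    }

def publish_setpoint(publish, speed_mps: float, heading_deg: float, height_m: float) -> None:
    """`publish(topic, payload)` can be any MQTT publish function, e.g. paho-mqtt's."""
    setpoint = map_treadmill_to_drone(speed_mps, heading_deg, height_m)
    publish("drone/control/setpoint", json.dumps(setpoint))
```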
Figure 2: Drone Payload
Figure 3: Demonstration Setup
Scenario
The demonstration will highlight remote UAV control using the developed XR platform. During the demo, the drone will be located in the indoor flight arena (TII building), while the system user is at another, distant location. The user will perform multiple maneuvers to showcase the set of possible commands (takeoff, change heading, change speed, land, ...) that can be sent to the UAV. A separate screen will show the FPV view that the user is experiencing during the demo flight. As mentioned above, the demo setup will include multiple devices to show the interactions between the different system components through the remote cloud services.