From Data to New Knowledge (D2K)
Interdisciplinary computational concepts, methodologies, and tools for forming practically useful new knowledge from large masses of heterogeneous data.
- Deep knowledge acquisition to allow high level inferences
- Script knowledge extraction
- Multi-scale data abstraction
- Massive data processing
- Learning by reading / Machine reading: automatic, unsupervised understanding of heterogeneous multimedia documents, i.e., the formation of a coherent set of beliefs from a multimedia, multi-source corpus and a background theory
- Build the knowledge of a domain from data extracted from the web, for example the history of a country (multi-scale vision)
- In CAD systems, automatically build a maintenance manual for a new device, using data from databases of devices with similar parts (possibly extracted from the web) and maintenance rules
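The machine-reading goal above (forming a coherent belief set from a multi-source corpus and a background theory) can be sketched minimally. The patterns, relations, and the functional-constraint background theory below are illustrative assumptions, not an actual D2K system: triples are extracted from each source, merged, and beliefs conflicting with a functional constraint are dropped.

```python
import re

# Toy extraction patterns; a real machine-reading system would use
# learned extractors over heterogeneous multimedia documents.
PATTERNS = [
    (re.compile(r"(\w+) was born in (\w+)"), "born_in"),
    (re.compile(r"(\w+) is the capital of (\w+)"), "capital_of"),
]

def extract_triples(text):
    """Extract (subject, relation, object) triples from raw text."""
    triples = set()
    for pattern, relation in PATTERNS:
        for subj, obj in pattern.findall(text):
            triples.add((subj, relation, obj))
    return triples

def merge_beliefs(documents, background):
    """Form a coherent belief set: merge triples from all sources,
    then drop any belief that violates the background theory.
    Here the background theory is just a set of functional relations
    (one value per subject), e.g. a country has one capital."""
    beliefs = set()
    for doc in documents:
        beliefs |= extract_triples(doc)
    coherent, seen = set(), {}
    for subj, rel, obj in sorted(beliefs):
        if rel in background["functional"]:
            if (rel, subj) in seen and seen[(rel, subj)] != obj:
                continue  # conflicting value across sources: keep first, stay coherent
            seen[(rel, subj)] = obj
        coherent.add((subj, rel, obj))
    return coherent
```

The merge step is where the "coherent set of beliefs" requirement bites: without the background constraints, contradictory extractions from different sources would simply accumulate.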
Green ICT, towards Zero Power ICT (G-ICT)
- Low consumption devices (new processor design, new computing paradigm)
- Energy-efficient systems and architectures (hardware and software)
- Energy Harvesting
- The storage of large amounts of data consumes more and more energy as data volumes grow. New types of memory are foreseen (for instance resistive memories) that are non-volatile and will allow the power to these memories to be shut down. This raises questions, especially for low-consumption exascale computing:
  - What kinds of new architectures (neuromimetic, associative, data-driven…)?
  - How to incorporate distributed computing capabilities among these sleeping memories?
- Distributed unattended sensor networks that wake up according to their energy-harvesting capabilities: when the number of sensors and the network topology are unknown, new advanced operating systems and architectures are needed, especially if the sensors are heterogeneous and the functions to be performed collectively are a priori ill-defined or unknown.
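The harvesting-driven wake-up behaviour in the last item can be sketched as a toy simulation. All energy figures (wake threshold, task cost, buffer capacity, harvest distribution) are illustrative assumptions, not measured device parameters: each sensor stays asleep until its harvested energy budget allows a sensing task, so duty cycle emerges from the energy supply rather than a fixed schedule.

```python
import random

class HarvestingSensor:
    """Toy model of an unattended sensor that wakes up only when its
    harvested energy allows (all thresholds are illustrative)."""
    WAKE_THRESHOLD = 5.0   # energy units required before waking up
    TASK_COST = 4.0        # energy consumed by one sense/transmit task
    CAPACITY = 10.0        # capacity of the energy buffer

    def __init__(self, seed):
        self.energy = 0.0
        self.rng = random.Random(seed)  # seeded for reproducibility
        self.tasks_done = 0

    def step(self):
        """One time step: harvest a random amount, wake if the budget allows."""
        self.energy = min(self.CAPACITY,
                          self.energy + self.rng.uniform(0.0, 1.0))
        if self.energy >= self.WAKE_THRESHOLD:
            self.energy -= self.TASK_COST
            self.tasks_done += 1
            return True   # sensor was awake this step
        return False      # sensor stayed asleep

def simulate(n_sensors=3, steps=100):
    """Run independent sensors; return how often each one woke up."""
    sensors = [HarvestingSensor(seed=i) for i in range(n_sensors)]
    for _ in range(steps):
        for s in sensors:
            s.step()
    return [s.tasks_done for s in sensors]
```

The open research questions start exactly where this sketch stops: here the sensors are independent and the task is fixed, whereas the roadmap targets heterogeneous sensors that must coordinate a collectively defined function without knowing the network topology.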