Siran Lei, Mengqi Cheng and Jianguo Jiang, Department of Mathematics, Liaoning Normal University, Dalian, China
The verification of the correctness of large programs, particularly operating systems, is a daunting but important endeavor. We are interested in verifying C programs with formal methods; the logic we use is separation logic, a Hoare-style program logic. In this paper, we present a simple extension of the syntax of separation logic assertions in an existing verification system in the Coq proof assistant, making assertions more versatile and flexible for describing program states. Moreover, we develop several tactics for proving related assertions, reducing manual proof effort as much as possible and improving the efficiency of verification.
Separation Logic, Coq, Interactive Theorem Proving, Program Verification, Automated Reasoning
Howard Dittmer and Xiaoping Jia, School of Computing, College of Computing and Digital Media, DePaul University, Chicago, Illinois, USA
Over time the level of abstraction embodied in programming languages has continued to grow. Yet, most programming languages still require programmers to conform to the language's rigid constructs. These constructs have been implemented in the name of efficiency for the computer. The continual increase in computing power allows us to consider techniques that are no longer limited by this constraint. To this end, we have created CABERNET, a Controlled Natural Language (CNL) approach. CABERNET allows programmers to use a simple outline-based syntax. This allows increased programmer efficiency and syntax flexibility. CNLs have successfully been used for writing requirements documents. We propose taking this approach well beyond this to fully functional programs. Our approach uses heuristics and inference to analyze and determine the programmer's intent. The goal is for programs to be aligned with the way humans think rather than the way computers process information.
Controlled Natural Language, Literate Programming, Programming Language, Computer-Aided Software
Mr. Daren Chesworth, Glyndwr University, United Kingdom
The role of maintenance as a ‘non-value’ activity within an organization has evolved over the last half-century. Strategically, it has now become one of the most ‘Smart’ thinking areas of Engineering. This is due to the increased demands by industry to maintain high levels of efficient production whilst providing feedback on equipment performance. The new Industrial Revolution (Industry 4.0) has been driven by technology advances in sensors and wireless data transmission, which have been adopted by those who develop strategies within the organization to provide unprecedented levels of equipment data. Analysed correctly, equipment failures can be predicted with greater accuracy, which will reduce unplanned machine downtime within a system. This reduction in downtime will increase overall efficiencies, which in turn will improve organization profits. This paper builds upon the work of Garg et al (2006) to contemplate whether the more frequently adopted maintenance strategies (often referred to as ‘traditional’) remain applicable. It then goes further to contemplate the use of Industry 4.0 tools, their inclusion into a Maintenance Management Model, and how accessible these new developments are to all industries.
Industry 4.0, IoT, CPS, Big Data, Cloud Computing, Maintenance, Strategy, Industrial Revolution
Sara Beitelspacher, Mohammad Mubashir, Kedir Mamo Besher and Mohammed Zamshed Ali, Erik Jonsson School of Engineering & Computer Science at The University of Texas at Dallas, TX, USA
While routing through the Internet of Things (IoT) network, conventional healthcare data packets do not get any special priority routing treatment. IoT is facing performance issues in packet data transmission due to network congestion; with the conventional packet routing process, there is no guarantee that the patients’ health data will be properly routed to doctors on time. This leaves the remote medical treatment at risk with possible threat to patients’ lives. In this paper, we studied the current healthcare packet transmission process in IoT and its performance issues in congested IoT networks, then proposed a solution to prioritize healthcare data routing in congested IoT cloud networks. Our proposed system adds a healthcare data identifier in IP packet headers at the sensor level, modifies QoS software at the network router level, and prioritizes healthcare data packet routing based on the healthcare data identifier seen at router QoS. Test data shows promising results that may significantly benefit remote medical diagnostic process by saving time, cost, and more importantly, human lives. The analysis also opens up further avenues for research.
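The prioritization idea described above can be sketched as a toy router queue; the header flag name and the two-level queue discipline here are illustrative assumptions, not the authors' actual QoS modification:

```python
import heapq
import itertools

HEALTHCARE_FLAG = 1  # hypothetical identifier set in the IP header at the sensor

class PriorityRouter:
    """Toy router queue: packets flagged as healthcare data are dequeued first."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-breaker within a class

    def enqueue(self, packet):
        # Lower priority number = served earlier; healthcare gets 0, others 1.
        priority = 0 if packet.get("flag") == HEALTHCARE_FLAG else 1
        heapq.heappush(self._queue, (priority, next(self._counter), packet))

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

router = PriorityRouter()
router.enqueue({"id": "web-1", "flag": 0})
router.enqueue({"id": "ecg-7", "flag": HEALTHCARE_FLAG})
router.enqueue({"id": "web-2", "flag": 0})
order = [router.dequeue()["id"] for _ in range(3)]
# The flagged healthcare packet jumps ahead of earlier-arrived ordinary traffic.
```

A real router would apply such a discipline per egress interface inside its QoS scheduler rather than in a single in-memory queue.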
IoT, Priority, Congestion, Interference, Wireless sensor network
Mesmin J. Mbyamm Kiki, Zhang Jianbiao, Bonzou Adolphe Kouassi and Sakho Seybou, Beijing University of Technology, 100124 Beijing, China
Security issues in Smart Cloud Manufacturing Systems (SCMS) are the bottleneck that limits the development of smart cloud manufacturing. First, based on a security risk matrix of the SCMS architecture, the existing SCMS security system is shown to have defects such as centralization, leakage of evidence, and insecure identity authentication, message authentication, and access control. Given blockchain's technical characteristics of encryption, distributed consensus, trustlessness and anonymity, collective maintenance, immutability, traceability, and programmable smart contracts, we argue that blockchain can address these problems and propose a blockchain-based smart cloud manufacturing system security architecture (BCSCMS), focusing on five aspects: blockchain-based identity authentication, message authentication, access control, evidence protection, and security auditing with early warning. Finally, the security of the framework is demonstrated along the four dimensions of information confidentiality, integrity, traceability, and anonymity.
Security Implementation, Blockchain, BCSCMS Security Architecture.
Ugochukwu O. Matthew2, Jazuli S. Kazaure1, Ibrahim M. Hassan2, Ubochi Chibueze Nwamouh3, 1Department of Electrical & Electronics, Hussaini Adamu Federal Polytechnic, P.M.B 5004, Kazaure, Jigawa State, Nigeria, 2Department of Computer Science, Hussaini Adamu Federal Polytechnic, P.M.B 5004, Kazaure, Jigawa State, Nigeria and 3Department of Computer Engineering, Michael Okpara University of Agriculture, Umudike, Abia State, Nigeria
Enterprise data warehousing is a top-level strategic business and Information Technology (IT) investment in any organization that is technologically driven, profit driven, and customer oriented. Data warehouses can be developed in two alternative ways, by constructing data marts or by an enterprise-wide data warehouse strategy, and each has advantages and disadvantages. To create a data warehouse, data must be extracted from multiple heterogeneous source systems, transformed, cleaned, and loaded into an appropriate data store. Depending on the business requirements, either relational or multidimensional database technology can be used for the data stores. To provide a multidimensional view of the data using a relational database, a star schema data model architecture is used for the data federation implementation. Online analytical processing (OLAP) can be performed on both kinds of repository technology. The metadata describing the data in the data warehouse repository are important for IT and end-user specifications. A considerable number of database access tools and applications can be used in conjunction with a data warehouse, including Structured Query Language (SQL) queries, management reporting systems, enterprise query environments, decision support/executive information systems, enterprise intelligence portals, data mining, and customer relationship management, for effective and optimizable information retrieval, indexing, and enterprise business decisions. The central aim of this paper is to develop an enterprise data warehouse to speed up research and connect medical researchers from around the world, as medical solutions and clinical information will be archived in medical data warehouses and data marts for daily use into the next generation. This will enable focus to be narrowed as manpower multiplies in finding answers to certain medical mysteries.
Standard operating procedures will also be improved, as people will be allowed to access web portals for past medical and clinical solutions and research articles wherever they exist.
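The extract-transform-clean-load step described above can be sketched as follows; the two source layouts and the Celsius target schema are hypothetical, chosen only to illustrate the flow from heterogeneous sources into a single store:

```python
# Minimal ETL sketch: extract from two heterogeneous (hypothetical) medical sources,
# clean and transform the records, then load them into one uniform store.
source_a = [{"patient": "P1 ", "temp_f": "98.6"}, {"patient": "p2", "temp_f": "101.3"}]
source_b = [{"id": "P3", "temp_c": 37.8}]

def etl(sources):
    store = []
    for row in sources[0]:                                   # extract from source A
        store.append({
            "patient_id": row["patient"].strip().upper(),    # clean: whitespace, case
            "temp_c": round((float(row["temp_f"]) - 32) * 5 / 9, 1),  # transform units
        })
    for row in sources[1]:                                   # source B is already Celsius
        store.append({"patient_id": row["id"].upper(), "temp_c": round(row["temp_c"], 1)})
    return store

warehouse = etl([source_a, source_b])
```

In a production warehouse the load target would be a relational or multidimensional store and the transformations would be driven by metadata, but the extract-transform-clean-load shape is the same.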
Data Mining, Data Warehouse, Cloud Computing, Internet of Things (IoT) & ICT
Yang Chen*, Rensheng Cui, Xiaoyi Zhu, Yinxing Zhou, Zhan Lin, Minghui Liu, Institute of Earthquake Science, CEA, Beijing 100036, China
The wireless sensor network is widely used in structural health monitoring systems because of advantages such as low cost and mobility. The MQTT protocol is a broker-based lightweight publish/subscribe messaging protocol with the characteristics of low power consumption, openness, and simple operation, so it is suitable for wireless sensors with limited resources. In this paper, a wireless sensor network scheme based on the MQTT protocol for structural vibration monitoring is proposed. A data compression algorithm is defined to reduce the amount of data transferred, and data encryption is used to ensure transfer security. A wireless acceleration sensor and a processing software system were developed based on this scheme. To test transmission performance, five sensors were installed on a building, and the MQTT server and the processing software were installed on a cloud server to receive and process the real-time waveform data continuously. For a three-direction acceleration sensor with a sampling interval of 0.005 s, the amount of data transferred was about 600-800 bytes/s and the transmission delay was about 0.4-0.6 s/message.
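The idea of shrinking the payload before publishing it over MQTT can be sketched as a delta-plus-DEFLATE encoder; this is a minimal stand-in, not the paper's actual compression algorithm, and the synthetic readings are invented:

```python
import json
import zlib

def compress_samples(samples):
    """Delta-encode then DEFLATE a list of integer ADC readings.

    Slowly varying vibration data has small sample-to-sample deltas,
    which compress much better than the raw values.
    """
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    return zlib.compress(json.dumps(deltas).encode())

def decompress_samples(blob):
    deltas = json.loads(zlib.decompress(blob))
    samples, acc = [], 0
    for d in deltas:
        acc += d
        samples.append(acc)
    return samples

readings = [1000 + (i % 7) for i in range(200)]  # synthetic single-axis channel
blob = compress_samples(readings)
```

The compressed blob would then be the MQTT message payload; the subscriber side runs the inverse transform before processing the waveform.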
structural health monitoring system, wireless sensor network, MQTT protocol, acceleration sensor
David Raba1, Salvador Gurt2, Oriol Vila3, and Esteve Farres4, 1IN3 – Computer Science Dept., Universitat Oberta de Catalunya, Castelldefels, Spain and 2,3,4Insylo Technologies Inc., Girona, Spain
The animal feed supply chain to farm, mainly represented by the feed suppliers and livestock farmers, currently faces great inefficiencies due to outdated supply chain management. Stakeholders struggle with the timing and quantity evaluation when restocking their feed bins, significantly affecting cost and labour efficiency. However, the lack of accurate and cost-effective sensors to measure stock levels of solid materials stored in containers and open piles is preventing the implementation of these strategies in a large number of industrial sectors. In these cases, traditional technologies cannot offer a convenient solution due to an inevitable trade-off between accuracy and cost. This work develops an integral feedstock management system to optimise the entire supply chain. A new monitoring system based on an RGB-D sensor is presented, as well as the data processing pipeline from raw depth measurements to bin-specific daily consumption rates.
Inventory management, Vendor Managed Inventories, Internet of Things.
Donatien Niyonzima and Kriti Bhuju, Institute of Communication Studies, Communication University of China, Beijing, China
By using Big Data, advanced analytics and Artificial Intelligence, Chinese media and communication companies have taken advantage of the potential of these technologies to improve their capability to communicate to the world and to strategically optimize their messages and policy content. AI offers huge promise for media companies, yet success to date has been reserved for the pioneering few. Hence, this study has tried to find out the status of AI use in Chinese media and explore new media roles made possible by its use. Some of the most important journalistic tasks AI has helped with are object extraction and automated tagging, automated fact checking, content moderation, speech-to-text (mostly English for now), and ad targeting, among others. The media in China have been using AI for all these tasks and are far ahead of media around the world in terms of AI adoption.
Artificial Intelligence, Media, Global Communication, Content Optimization, Automation
SINAYOBYE J. Omar1, MUSABE Richard1, UWITONZE Alfred1, NGENZI Alexander1 and NTAKIRUTIMANA Amini2, 1University of Rwanda, Kigali and 2CMU-Africa, Kigali
Feature selection techniques show that more information is not always good in machine learning applications. By applying different algorithms to the data at hand and comparing baseline classification performance values, we can select a final feature selection algorithm. This paper proposes a correlation-based filter feature selection model using ensemble learning classifiers. The objective is to select relevant features and identify the best-performing learning algorithms in order to train our model, make predictions, and compare classification performance. In this method, features are ordered according to their absolute correlation value with respect to the class attribute. Then the top N features are selected from the ordered list to form a reduced dataset. This proposed classifier model is applied to our smart meter datasets. To measure the performance of the selected features, four learning algorithms are used: Random Forest (RF), k-Nearest Neighbour (kNN), Decision Tree (DT), and Support Vector Machine (SVM). This paper then analyzes the performance of all learning algorithms with feature selection in terms of accuracy, sensitivity, F-measure, specificity, precision, and MCC. From our experiments, we found that the Random Forest classifier performed better than the other classifiers.
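The ranking step described above (order features by absolute correlation with the class, keep the top N) can be sketched in a few lines; the smart-meter-style feature names and values are invented for illustration:

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_top_n(features, labels, n):
    """Rank features by |correlation| with the class attribute and keep the top N."""
    ranked = sorted(features,
                    key=lambda f: abs(pearson(features[f], labels)),
                    reverse=True)
    return ranked[:n]

# Toy data: peak_load tracks the label, idle_hours anti-correlates, meter_noise is noise.
features = {
    "peak_load":   [1, 2, 3, 4, 5, 6],
    "meter_noise": [5, 1, 4, 2, 6, 3],
    "idle_hours":  [6, 5, 4, 3, 2, 1],
}
labels = [0, 0, 0, 1, 1, 1]
```

Using the absolute value means strongly negatively correlated features (like `idle_hours` here) are kept, since they are just as informative for the classifier.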
Feature selection, Ensemble learning Techniques, Feature Extraction, smart meter data sets
Adejoro O. Cornelius, Francisca N. Ogwueleka, and Abraham E. Evwiekpaefe, Department of Computer Science, Faculty of Military Science and Interdisciplinary Studies, Nigerian Defence Academy, Kaduna, Nigeria
The advancement in technological processing and the analysis of large and complex data offer new opportunities and challenges for police and security agencies. This research seeks to incorporate online big data analytics to improve Nigerian military intelligence through the use of natural language processing mechanisms. A model that handles named entity recognition and word sense disambiguation is built using machine learning techniques such as the Naïve Bayes algorithm to classify and understand public concern or expectations for military missions in Nigeria. The analysis of Twitter data is aimed at classifying Nigeria-based tweets as positive or negative as they concern military operations in Nigeria.
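A minimal sketch of the kind of Naïve Bayes tweet classifier described above, with Laplace smoothing; the four training tweets and their labels are invented for illustration, not taken from the study's data:

```python
from collections import Counter
import math

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing for two-class tweet sentiment."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}

    def predict(self, doc):
        def log_prob(c):
            total = sum(self.word_counts[c].values())
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in doc.lower().split():
                # Add-one smoothing keeps unseen words from zeroing the probability.
                lp += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_prob)

nb = NaiveBayes()
nb.fit(
    ["troops secured the town", "grateful for army rescue",
     "soldiers harassed civilians", "terrible military checkpoint delays"],
    ["positive", "positive", "negative", "negative"],
)
```

A deployed version would add tokenization, stop-word removal, and the named entity recognition and disambiguation steps the abstract mentions before classification.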
Leveraging, Classification, Military Intelligence and Natural Language Processing.
Samar Mouti and Samer Rihawi, Department of Information Technology, Khawarizmi College, Abu Dhabi, UAE
This paper describes the UAE Sign Language (USL) system, deployed in a real domain for deaf-mute people. The USL system supports the interaction of the deaf with the community. It is an expert system that translates speech into UAE Sign Language, composed of a speech recognizer, a natural language translator, and a video generator. The speech recognizer decodes the spoken utterance into written text. The natural language translator then converts the written text into a sequence of UAE Sign Language videos using the video generator.
Speech Recognition, UAE Sign Language, Python, Expert System, Natural Language Processing
Raja Muhammad Suleman and Yannis Korkontzelos, Department of Computer Science, Edge Hill University, Ormskirk, Lancashire L39 4QP, United Kingdom
Natural Language Processing (NLP) is a sub-field of Artificial Intelligence that is used for analysing and representing human language automatically. NLP has been employed in many applications, such as information retrieval, information processing and automated answer grading. Several approaches have been developed for understanding the meaning of text, commonly known as semantic analysis. Latent Semantic Analysis (LSA) is a widely used corpus-based approach that evaluates similarity of text on the basis of semantic relations among words. LSA has been used successfully in different language systems for calculating the semantic similarity of texts. However, LSA ignores the structural composition of sentences and therefore suffers from the syntactic blindness problem. LSA fails to distinguish between sentences that contain semantically similar words but have completely opposite meanings. LSA is also blind to the syntactic structure of a sentence and therefore cannot differentiate between sentences and lists of keywords. As a result, the comparison between a sentence and a syntactically unstructured list of keywords can receive a high similarity score. In this research we propose an extension of Latent Semantic Analysis (xLSA) that focuses on the syntactic composition of a sentence to overcome LSA's syntactic blindness. xLSA was tested on sentence pairs containing similar words but having different meanings. Our results showed that our extension allows LSA to overcome the syntactic blindness problem and provide more realistic semantic similarity scores.
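The syntactic blindness problem can be seen directly in a bag-of-words cosine, the word-order-free view of text that plain LSA inherits; this toy score treats two sentences with opposite meanings as identical:

```python
from collections import Counter
from math import sqrt

def bow_cosine(s1, s2):
    """Cosine similarity over bag-of-words counts: word order is discarded entirely."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    norm = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
    return dot / norm

# Opposite meanings, identical word multiset: score == 1.0 (up to float rounding).
score = bow_cosine("the dog bit the man", "the man bit the dog")
```

LSA operates on exactly this kind of term-count representation (after dimensionality reduction), which is why it cannot separate such pairs without the syntactic signal xLSA adds.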
Natural Language Processing, Natural Language Understanding, Latent Semantic Analysis, Semantic Similarity.
Asad Abdi1 and Chintan Amrit2, 1Department of Industrial Engineering and Business Information Systems, University of Twente, Netherlands and 2Department of Operations Management, Amsterdam Business School, University of Amsterdam, Netherlands
Text summarization is the process of automatically creating a compressed version of a given document that preserves its information content. This paper presents an optimization-based model whose objective function is a weighted combination of content coverage and diversity objectives. An innovative aspect of our model lies in its ability to select salient sentences from given document sets while reducing redundancy in the summary. The optimization problem is solved using a differential evolution algorithm. Experiments on the DUC2002 dataset showed that the proposed model outperforms the other summarization methods.
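A sketch of a weighted coverage/diversity objective of the kind described above; Jaccard word overlap stands in for the paper's sentence similarity measure, and the tiny input is solved by brute force rather than differential evolution:

```python
from itertools import combinations

def similarity(a, b):
    """Jaccard word overlap as a stand-in for a real sentence similarity measure."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def objective(summary, document, w=0.6):
    # Coverage: how well the selected sentences cover the whole document.
    coverage = sum(max(similarity(s, d) for s in summary) for d in document) / len(document)
    # Diversity: penalize redundancy among the selected sentences themselves.
    pairs = list(combinations(summary, 2))
    redundancy = sum(similarity(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
    return w * coverage + (1 - w) * (1 - redundancy)

def best_summary(document, k=2, w=0.6):
    """Exhaustive search, feasible only for tiny inputs; the paper uses
    differential evolution to search this space for real documents."""
    return max(combinations(document, k), key=lambda s: objective(s, document, w))

doc = ["cats chase mice", "cats chase small gray mice", "dogs guard the house"]
best = best_summary(doc, k=2)
```

The redundant pair (the two near-duplicate "cats" sentences) scores poorly on both terms, so the optimum covers both topics.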
Information Diversity, Optimization Model, Sentence Clustering, Text Summarization
Martina Toshevska, Frosina Stojanovska and Jovan Kalajdjieski, Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia
Distributed language representation has become the most widely used technique for language representation in various NLP tasks. Other representations, such as BoW and one-hot encoding, do not capture the context of words or the closeness between words. On the other hand, distributed word representations have proven very useful in many algorithms. In this paper, we explore different approaches for creating distributed word representations, commonly called word embeddings. We present an overview of different state-of-the-art word embedding methods. Furthermore, we analyse their performance in capturing word similarities on existing benchmark datasets for word-pair similarity.
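Evaluation against a word-pair benchmark can be sketched as follows; the 3-d vectors and the "human" similarity judgments are toy stand-ins for trained embeddings (word2vec, GloVe, fastText are typically 100-300-dimensional) and a real benchmark dataset:

```python
from math import sqrt

# Toy 3-d embeddings standing in for trained vectors.
emb = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.12],
    "fruit": [0.10, 0.20, 0.90],
    "apple": [0.15, 0.25, 0.85],
}

def cosine(u, v):
    """Cosine similarity, the standard closeness measure for embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Benchmark-style word pairs with hypothetical human similarity judgments.
pairs = [("king", "queen", 0.92), ("apple", "fruit", 0.85), ("king", "apple", 0.10)]
model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
```

Benchmark papers typically report how well the model's scores rank-correlate (Spearman) with the human judgments over many such pairs.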
Word Embeddings, Distributed Word Representation, Word Similarity
Winnie Ng1 and Vincent Cho2, 1Faculty of Business, The Hong Kong Polytechnic University, Hong Kong, Hong Kong and 2Department of Management and Marketing, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
This paper uses a natural language analytics approach, studying the scripts of product prelaunch events, to investigate the determinants driving product sales. This research contributes a theoretical framework identifying the customer values that impact product sales and the induced consumer psychology, namely driving an optimism attitude and affective forecasting messages during product prerelease communication events, to drive product sales growth. Through a pilot study analysing word counts from the scripts of Apple Inc. product prelaunch events, we found that product function and experiential/hedonic customer values drive product sales. An induced affective forecasting message negatively moderated the relationship between cost/sacrifice values and product sales. This research provides practical guidelines on how to shape product prelaunch communication activities to maximize sales of the to-be-released products.
Product Preannouncement, Product Sales, Signalling, Communications, Speech Recognition
Himanshu Choudhary, Vibhor Saxena, Sahil Sulekhiya, Sahib Lotey and Sumedha Seniaray, Delhi Technological University (Formerly Delhi College of Engineering), New Delhi, India
Machine translation has been an active research field for more than a decade because of the inherent ambiguity and flexibility of human languages. Manual evaluation is time-consuming and subjective, so automatic metrics are most frequently used. There are various ways to check and evaluate the translation accuracy of the generated output, and BLEU is one of the most widely used evaluation metrics in the machine translation literature. Some of these metrics (especially BLEU) have shown a good estimate of translation accuracy for foreign languages but are still not adequate for morphologically rich languages. In this paper, we focus on Indian languages and show that BLEU is not a suitable measure of translation quality for these languages. The paper evaluates translation scores for the English-Hindi language pair using different metrics and human evaluation. The correlation between human scores and these metrics identifies their relationship with each other. We built a regression model to predict human scores and used statistical tools to analyze the relationships. We also observed that NIST and RIBES are much closer to human assessment and perform much better at assessing the quality of translated output for Indian languages than the BLEU metric.
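To make the metric discussion concrete, here is a minimal unigram BLEU (clipped precision with brevity penalty); full BLEU geometrically averages n-gram orders 1-4, and the example shows how pure n-gram matching interacts with the free word order common in Indian languages:

```python
from collections import Counter
from math import exp

def bleu1(candidate, reference):
    """Unigram BLEU: clipped unigram precision times the brevity penalty.

    Full BLEU combines precisions for n-gram orders 1 through 4; higher
    orders penalize legitimate reorderings in free-word-order languages.
    """
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    # Clip each candidate word's count by its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / len(cand))
    return bp * precision

# A word-reordered Hindi-like candidate scores a perfect 1.0 at the unigram level,
# while higher-order n-grams would punish the same reordering even if it is valid.
score = bleu1("ghar vah gaya", "vah ghar gaya")
```

Metrics such as NIST (information-weighted n-grams) and RIBES (rank-correlation of word order) reweight exactly these two ingredients, which is consistent with the paper's finding that they track human judgments better for English-Hindi.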
Machine Learning, Data analytics, Deep Learning, Indian Languages, Machine Translation, Natural Language Processing(NLP)
Manaswini R., Saikrishna B., Nishu Gupta, Electronics and Communication Engineering Department, Vaagdevi College of Engineering, Warangal, India
The Internet of Things (IoT) enables the interconnection of various entities in smart health, smart industry, and smart home development. These systems have already gained popularity, which is still increasing day by day. Moving ahead, we need to develop transport systems all over the world with the help of IoT sensor technology. This new direction leads to the introduction of the Internet of Vehicles (IoV). IoV facilitates communication between vehicle and vehicle, vehicle and sensors, and vehicle and roadside units (RSU) with the help of artificial intelligence. It forms a solid backbone for intelligent transportation systems (ITS), which gives further insight into the technologies that better explain traffic efficiency and their management applications. In this article, a novel approach towards the architecture model, framework, cross-level interaction, applications, and challenges of IoV is discussed in the context of ITS and future vehicular scenarios.
Artificial Intelligence, Cloud Computing, Internet of Vehicles, Internet of Things, Intelligent Transportation System, VANET
Mujahid F. AL-Azzo and Azzah T. Qaba, Electronic Engineering Department, College of Electronic Engineering, Ninevah University, Mosul, Iraq
A mathematical model for the localization of acoustical sources with separation between them is derived and presented. A classical (Fourier transform) method and a modern, parametric (Burg) method are used. The results show the capability of the Burg method to resolve adjacent sources, as well as to localize them, when compared with the Fourier transform method. The performance is studied by varying some parameters of the problem.
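The classical (Fourier transform) method can be sketched as a plain DFT periodogram of two well-separated synthetic sources; the Burg recursion itself is beyond this sketch, and it is precisely with short records and closely spaced sources, where the periodogram's resolution limit bites, that the parametric method pays off:

```python
import cmath
import math

def periodogram(signal, n_freq=256):
    """Magnitude of the zero-padded DFT: the classical (Fourier) spectral estimate."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * (k / n_freq) * n)
                    for n, x in enumerate(signal))) / N
            for k in range(n_freq // 2)]

# Two synthetic "sources" at normalized frequencies 0.1 and 0.3, 64 samples observed.
N = 64
signal = [math.sin(2 * math.pi * 0.1 * n) + math.sin(2 * math.pi * 0.3 * n)
          for n in range(N)]
spec = periodogram(signal)

# Locate the spectral peak near each source (expected near bins 25.6 and 76.8
# on the 256-bin grid).
p1 = max(range(15, 40), key=lambda k: spec[k])
p2 = max(range(60, 95), key=lambda k: spec[k])
```

With only 64 samples the Fourier resolution is about 1/64 in normalized frequency; sources closer than that merge into one peak, which is the regime where Burg's autoregressive estimate resolves what the periodogram cannot.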
Burg method, spectral estimation, location of sources
Shirokanev Alexander1,2, Kibitkina Alena2, Ilyasova Nataly1,2 and Zamyckij Evgeny3, 1IPSI RAS - branch of the FSRC «Crystallography and Photonics» RAS, Molodogvardejskaya street 151, Samara, Russia, 443001, 2Samara National Research University, Moskovskoe Shosse 34A, Samara, Russia, 443086 and 3Samara State Medical University, Chapayevskaya street 89, Samara, Russia, 443099
Diabetic retinopathy is a frequent and highly dangerous fundus disease that can result in many serious complications. For various reasons, patients lose vision when diabetic retinopathy is treated incorrectly or too late. The current method of treating diabetic retinopathy is laser coagulation. The ophthalmologist decides which zones need to be coagulated to reduce edema based on his experience. Laser radiation parameters and the distance between laser shots are also selected based on the experience of previous operations. However, the accuracy of the selection of these parameters can affect the result of treatment, and achieving high accuracy empirically is difficult. The present paper proposes a technology for selecting an effective laser coagulation strategy, consisting of a genetic efficiency-optimization algorithm based on solving mathematical simulation problems of laser burns. The technology solves the problem of choosing accurate laser coagulation parameters.
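The genetic optimization loop described above can be sketched as follows; the fitness surrogate and the "ideal" power and spot-spacing values are invented placeholders, in the paper the efficiency criterion comes from thermal simulation of laser burns, not from a fixed target:

```python
import random

random.seed(0)

IDEAL_POWER_W = 0.12      # hypothetical, for the surrogate only
IDEAL_SPACING_UM = 180.0  # hypothetical, for the surrogate only

def fitness(params):
    """Toy surrogate for coagulation efficiency (higher is better)."""
    power, spacing = params
    return -((power - IDEAL_POWER_W) ** 2
             + ((spacing - IDEAL_SPACING_UM) / 500.0) ** 2)

def genetic_optimize(pop_size=30, generations=60):
    # Random initial population of (power, spacing) candidates.
    pop = [(random.uniform(0.05, 0.3), random.uniform(100.0, 300.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # averaging crossover
            child = (child[0] + random.gauss(0, 0.005),      # gaussian mutation
                     child[1] + random.gauss(0, 2.0))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_optimize()
```

Swapping the surrogate for a fitness that runs the burn simulation is the step that makes the loop a treatment-planning tool rather than a toy.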
Diabetic retinopathy, laser coagulation, ocular fundus, information technology, mathematical modeling, thermal conductivity equation
Ahed Abugabah, Nishara Nizamuddin, Alaa Abugabah, College of Technological Innovation, Zayed University, Abu Dhabi, United Arab Emirates
The healthcare industry is progressively involved in adopting new technologies to improve the quality of care given to patients. The implementation of RFID technology has globally impacted several industries, and this revolution has improved aspects of service delivery in the healthcare industry as well. RFID technology has the potential to track medical assets and interact with almost any of the medical devices, pharmaceutical materials, IT equipment, or individual patients in hospitals all over the world. The motivation behind this paper is to investigate the advantages of and obstacles to implementing RFID technology in the healthcare sector, as highlighted in the literature. Further, we highlight the most feasible methods and technologies that can be adopted to overcome the limitations of implementation.
Jihane LARIOUI1 and Abdeltif EL BYED2, 1,2Laboratory LIMSAD, Faculty of Sciences Ain Chock, Hassan II University, Casablanca, Morocco
Over time, intelligent transport systems (ITS) have shown their relevance and have become essential for good urban mobility management within smart cities. They facilitate traffic management and make flows more fluid and autonomous. However, the data manipulated in ITS are large and diversified, which makes the information less exploitable and leads to a considerable loss of time before the desired information is obtained. In the literature, no formal semantics describing all aspects of intelligent road transport systems has been proposed. Thus, the objective of this semantic approach is to localize the different resources of the Web and to standardize the description models so that they are understandable and usable by the different agents of the system. In this article, we propose a new approach for the development of an architecture based on multi-agent systems (SMA) coupled with semantic Web services (SWS), in order to support decision-making in the context of urban mobility. We are implementing an ontology for the multimodal transport system aimed at improving the management of urban mobility, covering all modes of transport and capable of managing planned trips as well as answering questions related to safety during travel.
Multi-agent System (SMA), Intelligent transport systems (ITS), Ontology, Semantic Web, Multi-modal Transportation
Henry Collier1 and Alexandra Collier2, 1College of Graduate and Continuing Studies, Norwich University, Northfield, Vermont, USA and 2Southern New Hampshire University, Manchester, New Hampshire, USA
Current practices to defend networks against threats involve hardening systems by limiting points of ingress into the system. The most common method of limiting ingress is restricting which ports are allowed through the firewall. Port limitation as a method of defense is normally effective. Ports in a firewall range from 0 through 65,535 and cover the technical aspects of information security. One method of ingress not covered by technical ports is the human port, coined “port Z3r0” for this paper. To better defend against port Z3r0, we must better understand the human and why they are susceptible. This paper explores the basic human behaviors related to susceptibility and identifies classifications of traits that increase a person’s susceptibility level. Additionally, this paper addresses how the current model of teaching end users to defend themselves is lacking and needs to be improved.
Information Security, Non-Malicious Insider Threat, Susceptibility, Human Behaviors, Cognition
Rushil Mallarapu, Fairfield Ludlowe High School, 785 Unquowa Av., Fairfield, CT, USA
The development of novel synthetic methodologies is a main driving force in theoretical and practical organic chemistry and has applications in the production of innovative pharmaceuticals or agrochemicals. Retrosynthesis, a synthetic design methodology that systematically identifies bond disconnections in target molecules, is the root of all synthetic planning. Despite this, little has been done on the computational automation of retrosynthesis. Our research asked whether a deep-learning model could be developed to predict retrosynthetic disconnections with no template-based reaction rules. We report the successful development of the Odachi Retrosynthesis Engine, a graph convolutional network that can identify retrosynthetically similar clusters over molecules and find corresponding disconnections. The model uses spectral graph convolutions to identify topological synthetic contexts. We also develop a website (retrosynthesis.com) to host the engine and allow chemists to utilize the model for synthetic design. This work is the first application of graph convolutional networks to the retrosynthesis problem, and enables the development of efficient and advanced synthetic strategies.
Synthesis, Deep Learning, Cheminformatics, Graph Networks
Dr. Deepak Nandal1 and Dr. Vikas Nandal2, 1Assistant Professor, Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar, Haryana, India, 2Assistant Professor, Electronics and Communications (UIET), Maharshi Dayanand University, Rohtak, Haryana, India
Cognitive Radio Networks (CRNs) have emerged as a prevailing technique in wireless networks. The last decade has witnessed an increasing demand for wireless radio spectrum, with users engaging the services of a number of available wireless access systems. With cognitive radio, spectrum efficiency improves due to the utilization of additional spectrum ranges rather than only the assigned spectrum range. The IEEE 802.22 working group has developed a standard for a cognitive Wireless Regional Area Network (WRAN) that operates in unused television channels and provides fixed wireless access services. Spectrum sensing algorithms have received much attention in CR applications to increase the use of available spectrum. In this paper, various spectrum sensing algorithms are compared in terms of spectrum resources and time, and the technique with the lowest bit error rate is identified.
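One of the simplest spectrum sensing algorithms compared in such studies is the energy detector; the threshold and the synthetic incumbent signal model below are illustrative assumptions:

```python
import math
import random

random.seed(1)

def energy_detect(samples, threshold):
    """Energy detector: declare the band occupied if average power exceeds a threshold."""
    return sum(x * x for x in samples) / len(samples) > threshold

N = 512
# Idle channel: noise only (average power ~1.0).
noise = [random.gauss(0, 1.0) for _ in range(N)]
# Occupied channel: hypothetical incumbent TV signal plus the same noise (~3.0).
signal = [2.0 * math.sin(2 * math.pi * 0.05 * n) + random.gauss(0, 1.0)
          for n in range(N)]

THRESHOLD = 1.5  # set between the two expected power levels
```

In practice the threshold is derived from the estimated noise floor and a target false-alarm probability; more complex schemes (matched filtering, cyclostationary detection) trade computation for sensitivity at low SNR, which is the trade-off such comparisons quantify.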
IEEE 802.22, Cognitive radio, incumbent, sensing, spectrum agility, energy efficiency, SNR
Puneet Bakshi and Sukumar Nandi, Indian Institute of Technology, Guwahati, Assam, India
In recent years, the Government of India has taken several initiatives to make India digitally strong. Among these are a unique digital identification (Aadhaar) number for residents, Aadhaar-based online authentication, and Aadhaar-based online services such as DigiLocker and eSign. Although these initiatives have helped India compete in the digital revolution across the world and have been acclaimed by many, they have also raised security concerns, especially around privacy. One such initiative, eSign, provides an online electronic signature service to its subscribers. Although most security aspects are addressed by eSign, its privacy aspects are yet to be addressed. For example, the present model of eSign reveals the identity and eKYC information of the signer, which the signer may not want to disclose and which, in fact, may not even be required; all that may be needed is a non-repudiable claim along with a proof of possession of certain attributes. Because of the inherent limitations of PKI, it is difficult for the present model of eSign to address privacy concerns at a fundamental level. Recent developments in cryptography have introduced Attribute-Based Signatures (ABS) and Attribute-Based Encryption (ABE), which, if used properly, can address the privacy concern, but a correct implementation has always been a challenge. This paper presents a scheme to implement privacy-enhanced eSign using attribute-based signatures. For a practical and efficient realization of the scheme, a token-based approach is proposed.
eSign, Aadhaar, Privacy
Jiang Quan and Rao Wenbi, Wuhan University of Technology, Wuhan 430070, China
At present, many new words expressing emotions are appearing on the Internet. These neologisms carry rich meanings but lack precise definitions, which makes it difficult to analyze their emotional tendency. This thesis studies the feasibility and framework design of a word2vec-based method for analyzing the emotional tendency of neologisms, and conducts experiments on a Weibo corpus. The results show that the emotional tendency of a new word can be inferred from its similar words.
Word vector, New word discovery, Emotional word, Tendency analysis, Word2Vec
Edmond Jajaga, Edi Hasaj and Fatlind Dautaj, Department of Computer Science, University for Business and Technology, Prishtina, Kosovo
A large fraction of Big Data is textual and carries spatial and temporal dimensions. Managing such data is a real challenge for traditional systems, both GIS and non-GIS. The experimental evaluation measures the performance of five different databases on an open non-relational dataset. The dataset was structured and tested separately in each store, revealing practical advantages and limitations of each from a practical point of view. The results are based on the throughput achieved per number of users. More than a million records were loaded and executed in each database. Owing to its semi-persistent model, Redis performed better than the other databases.
Non-relational data, Spatio-temporal data, Big Data, Cassandra, Couchbase, HBase, MongoDB, Redis
Vivek Aswal, Vineet Sreeram, Aishwarya Kuchik, Simran Ahuja and Himali Patel, Department of Electronics and Telecommunication (EXTC) Engineering, Vivekanand Education Society’s Institute of Technology, Mumbai, India
Bidirectional Long Short-Term Memory (LSTM) networks have been widely used for data classification and prediction. However, the use of LSTM predictions for data generation is not common. Since it is difficult to collect labelled human sensor data, and privacy concerns limit reuse of existing data, it is important to generate similar-looking data to supplement the existing training dataset. The data generated by prediction can be used to increase accuracy over the existing dataset by providing a larger dataset with more variation. To assess similarity, the losses between the generated and original data were calculated for each activity. To evaluate the performance of the generated data, an activity classifier was developed; a confusion matrix was then used to compare the augmented dataset with the original one. The comparison shows visually that the generated data helped increase overall accuracy.
Human Activity Recognition, LSTM, WISDM, Activity prediction, Activity generation, Confusion matrix
Venkata Duvvuri, Oracle Corp & Department of Technology and Leadership, Purdue University, IL, USA
In traditional SaaS enterprise applications, microservices are an essential ingredient for deploying machine learning (ML) models successfully. In general, microservices yield efficiencies in software service design, development, and delivery. As they become ubiquitous in the redesign of monolithic software, the addition of machine learning is also making traditional applications increasingly intelligent. Here, we propose a portable ML microservice framework, Minerva (microservices container for applied ML), as an efficient way to modularize and deploy intelligent microservices in traditional “legacy” SaaS application suites, especially in the enterprise domain. We identify and discuss the needs, challenges, and architecture required to incorporate ML microservices in such applications. Minerva’s design, which integrates with legacy applications through a microservices architecture on lightweight infrastructure, accelerates the deployment of ML models in such applications.
Microservices, SaaS applications, Machine Learning, Oracle Cloud Infrastructure