Accepted Papers

  • Network based Intrusion Detection System for Open vSwitch based Virtual Network using Snort
    Anusha Prakash, Priyal Vijayvargiya, Stuti Kumar, and Jaidhar C D, National Institute of Technology Karnataka, India
    Virtual networks have emerged due to the constraints of physical networks, in order to deliver additional value and create efficiencies. Network virtualization complicates the process of monitoring network traffic. Snort is an open-source network-based intrusion detection and prevention system (NIDPS) which has the ability to perform real-time traffic analysis on layer 3 and above headers. Among the various available virtual switches, Open vSwitch, a multilayer switch, provides a switching environment for virtualization along with support for standard management interfaces and protocols such as OpenFlow, sFlow, etc. It also provides for effective network automation through programmatic extensions. The various attacks on Open vSwitch illustrate the need for an intrusion detection system for virtual networks. Layer 2 IDPS is comparatively underdeveloped, leaving virtual networks vulnerable. This paper proposes approaches to a novel network-based intrusion detection system for Open vSwitch-based virtual networks using Snort. The proposed system extends the traffic monitoring capability of Snort to Open vSwitch, thus strengthening virtual network security by monitoring the network and preventing network anomalies.
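As a rough illustration of the wiring this abstract describes, Snort can be attached to an Open vSwitch bridge through a mirror port. This is a sketch under assumptions: the bridge and port names (`br0`, `snort0`) are hypothetical, and the paper's actual setup may differ.

```shell
# Create a bridge and an internal port for Snort to listen on.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 snort0 -- set Interface snort0 type=internal
ip link set snort0 up

# Mirror all bridge traffic to snort0 so Snort sees every frame.
ovs-vsctl -- set Bridge br0 mirrors=@m \
  -- --id=@p get Port snort0 \
  -- --id=@m create Mirror name=snort-mirror select-all=true output-port=@p

# Run Snort in IDS mode on the mirror port.
snort -i snort0 -c /etc/snort/snort.conf -A console
```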
  • Applications of the Erlang B and C Formulas to Model a Network of Banking Computer Systems – Moving Towards Green IT and Performant Banking
    Florin-Catalin ENACHE and Adriana-Nicoleta TALPEANU, The Bucharest University of Economic Studies, Romania
    This paper surveys the contributions and applications of queueing theory in the field of banking data networks. We start by highlighting the history of IT and banks, and continue by providing information regarding the main prudential regulations in the banking area, such as the Basel Accords and green IT regulations, which on one side generate more computing needs and on the other promote conscientious use of existing IT systems. Continuing with a background of the network technologies used in economics, the focus is on queueing theory, describing and giving an overview of the most important queueing models used in economic informatics. While queueing theory is characterized by its practical, intuitive and subtle attributes, queueing models are described by a set of three factors: an input process, a service process, and a physical configuration of the queue or the queueing discipline. The Erlang B and C formulas for a given number of servers s, arrival rate λ, and average service time are described, used, and confirmed by computer simulations of real queues usually found in banking computing systems. The goal is to provide sufficient information to computer performance analysts who are interested in using queueing theory to model a network of banking computer systems, applying the right simulation model in real-life scenarios, e.g. overcoming the negative impacts of European banking regulations while moving towards green computing.
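The Erlang B blocking probability and Erlang C waiting probability mentioned above can be computed directly. The sketch below is a minimal illustration (not the paper's simulation code): it uses the standard recurrence B(0) = 1, B(k) = a·B(k−1)/(k + a·B(k−1)) for offered load a = λ/μ erlangs, and derives Erlang C from Erlang B.

```python
def erlang_b(s, a):
    """Blocking probability for s servers at offered load a = lambda/mu
    erlangs, via the numerically stable recurrence
    B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(s, a):
    """Probability that an arriving request must wait in an M/M/s queue,
    derived from Erlang B: C = s*B / (s - a*(1 - B)); requires a < s
    for a stable queue."""
    if a >= s:
        return 1.0
    b = erlang_b(s, a)
    return s * b / (s - a * (1 - b))
```

For example, 2 erlangs offered to 3 servers gives a blocking probability of 4/19 ≈ 0.21 and a waiting probability of 4/9 ≈ 0.44.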
  • Using Social Development Environments in Introductory Computer Science Classrooms: a Case Study on SCI
    Hani Bani-Salameh, The Hashemite University, Jordan
    This article presents our experience of using a social software development environment called SCI to support the teaching of programming in software engineering and computer science courses. It describes the benefits of using a Social Development Environment (SDE), particularly the effect that integrating the social side has on students in the computer science classroom. This article presents a study that tests the usability of the SCI social development environment. It also contributes a test of the hypotheses concerning 1) the effect of using collaborative virtual environments, and 2) the benefits of integrating social networking within the SCI development environment. It presents both qualitative and quantitative evaluations of the SCI environment's effectiveness in a computer science classroom. User observation, informal discussions, and feedback via a questionnaire yielded promising results about the system. Students reported that the tool eases communication between them and their project partners, and that it presented them with passive awareness information that prevented them from interfering with others' work and from conflicting changes while working on project artefacts.
  • Efficient Manageability and Intelligent Classification of Web Browsing History Using Machine Learning
    Suraj G and Sumantha Udupa U, PES University, India
    Browsing the Web has emerged as the de facto activity performed on the Internet. Although browsing is tracked, the manageability of Web browsing history is very poor. In this paper, we present a workable solution, implemented using machine learning and natural language processing techniques, for the efficient management of a user's browsing history. The significance of adding such a capability to a Web browser is that it ensures very efficient and quick information retrieval from browsing history, which is currently very challenging. Our solution guarantees that any important website visited in the past remains easily accessible thanks to intelligent and automatic classification. In a nutshell, this solution-based paper provides an implementation, as a browser extension, that intelligently classifies the browsing history into the most relevant category automatically, without any user intervention. This guarantees that no information is lost and increases productivity by saving the time spent revisiting websites of importance.
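The abstract does not spell out the classifier used, but the idea of automatically assigning a visited page to a category can be sketched with a toy multinomial naive Bayes over page titles. All class names and training titles below are made up for illustration; they are not from the paper.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(title):
    return re.findall(r"[a-z]+", title.lower())

class HistoryClassifier:
    """Toy multinomial naive Bayes over page titles -- an illustrative
    stand-in for the paper's (unspecified) classification pipeline."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)   # category -> word counts
        self.class_counts = Counter()             # category -> #samples
        self.vocab = set()

    def train(self, samples):                     # samples: [(title, category)]
        for title, cat in samples:
            words = tokenize(title)
            self.word_counts[cat].update(words)
            self.class_counts[cat] += 1
            self.vocab.update(words)

    def classify(self, title):
        total = sum(self.class_counts.values())
        v = len(self.vocab)
        def log_posterior(cat):
            # log prior + Laplace-smoothed log likelihood of each token
            lp = math.log(self.class_counts[cat] / total)
            n = sum(self.word_counts[cat].values())
            for w in tokenize(title):
                lp += math.log((self.word_counts[cat][w] + 1) / (n + v))
            return lp
        return max(self.class_counts, key=log_posterior)
```

A browser extension would feed each visited page's title (or URL tokens) through `classify` and file the history entry under the winning category.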
  • Effects of The Different Migration Periods on Parallel Multi-Swarm PSO
    Şaban Gulcu, Necmettin Erbakan University, Turkey and Halife Kodaz, Selcuk University, Turkey
    In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously to solve a problem: multiple processors work concurrently, and the program is divided into different tasks that are solved simultaneously. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. Particle swarm optimization (PSO) is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm, based on PSO, has multiple swarms organized in the master-slave paradigm that work cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used well-known benchmark functions in the experiments and analysed the performance of PCLPSO under different migration periods.
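The role of the migration period can be illustrated with a minimal sequential sketch of multi-swarm PSO: several swarms evolve independently, and every `migration_period` iterations each swarm's best position replaces the worst personal best of the next swarm in a ring. The topology, parameter values, and migration rule here are illustrative assumptions, not the PCLPSO algorithm itself.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def multi_swarm_pso(f, dim=5, n_swarms=4, swarm_size=10,
                    iters=200, migration_period=20, seed=1):
    """Sequential sketch of parallel multi-swarm PSO with periodic
    migration of swarm bests along a ring (illustrative parameters)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    swarms = []
    for _ in range(n_swarms):
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
        vel = [[0.0] * dim for _ in range(swarm_size)]
        swarms.append({"pos": pos, "vel": vel, "pbest": [p[:] for p in pos]})
    for t in range(iters):
        for s in swarms:
            gbest = min(s["pbest"], key=f)        # swarm-local best
            for i in range(swarm_size):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    s["vel"][i][d] = (w * s["vel"][i][d]
                                      + c1 * r1 * (s["pbest"][i][d] - s["pos"][i][d])
                                      + c2 * r2 * (gbest[d] - s["pos"][i][d]))
                    s["pos"][i][d] += s["vel"][i][d]
                if f(s["pos"][i]) < f(s["pbest"][i]):
                    s["pbest"][i] = s["pos"][i][:]
        if (t + 1) % migration_period == 0:       # migration step
            bests = [min(s["pbest"], key=f) for s in swarms]
            for k, s in enumerate(swarms):
                worst = max(range(swarm_size), key=lambda i: f(s["pbest"][i]))
                s["pbest"][worst] = bests[(k - 1) % n_swarms][:]
    return min((p for s in swarms for p in s["pbest"]), key=f)

best = multi_swarm_pso(sphere)
```

Varying `migration_period` trades off diversity (large periods, isolated swarms) against convergence speed (small periods, rapid spread of good solutions), which is the parameter the paper studies.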
  • A Proposal for A Variability Management Framework
    Nesrine Khalfallah, Sami Ouali and Naoufel Kraiem, Campus Universitaire de La Manouba, Tunisia
    Although database schemas are an integral part of information systems, the use of software product lines has mainly been studied for the production of executable code. The impact on data management, and in particular on database schemas, is poorly documented and little studied in the literature. This paper is an attempt to explore some of the issues of modeling and implementing variability in databases through the use of disciplined approaches. We propose a variability management framework. The motivations for developing the framework are threefold: (a) to facilitate the comprehension of the discipline, (b) to classify and compare existing approaches to managing this discipline, and (c) to extract their insufficiencies in order to propose an approach that remedies them and resolves problems relating to this discipline. Finally, we introduce seven different variability management approaches and their instantiation according to the variability management framework.
  • Electron Transport Characteristics in 4H-SiC Polytype under High Electric Field
    Youcef Belhadji, Benyounes Bouazza and Ahmed Amine El-ouchdi, Universite Ibn Khaldoun, Algeria
    A non-parabolic three-valley model of the conduction band was used to study electron transport in the 4H-SiC polytype. The model is implemented using the ensemble Monte Carlo (EMC) method. It provides a detailed description of the electron dynamics and of the electrons' behavior at high electric fields in each of the considered valleys. The effects of temperature, phonon scattering, and ionized-impurity scattering are also considered in the simulation. Simulations are performed for strong electric fields ranging from 1 MV to 6 MV. We recorded a peak velocity of 3.9×10^7 cm/s at a time of less than 0.1 picoseconds.
  • Configuration of A Guidance Process For Software Process Modeling
    Hamid Khemissa and Mourad Oussalah, USTHB University, Algeria
    The current technology trend leads us to recognize the need for adaptive guidance processes for all processes of software development. The new needs generated by the mobility context for software development require these guidance processes to be adaptable. This paper deals with the configuration management of a guidance process, i.e., its ability to be adapted to specific development contexts. We propose a Y-shaped description for an adaptive guidance process. This description focuses on three dimensions defined by the material/software platform, the form of adaptation, and the provided guidance service. Each dimension considers several factors to develop a coherent configuration strategy and automatically provide the appropriate guidance process for the current development context.
  • Analysing Leader Election Protocols By Probabilistic Model Checking
    Xu Guo and Zongyuan Yang, East China Normal University, China
    Leader election algorithms have been studied intensively in recent years. This paper presents a quantitative analysis of a synchronous leader election algorithm using probabilistic model checking. In particular, it uses a leading probabilistic model checker, PRISM, to simulate executions of the algorithm. For a given number of processes, the algorithm terminates with probability 1. From a performance analysis perspective, this paper investigates the execution efficiency of the algorithm. The property considered is the expected number of rounds needed to elect a leader under given circumstances. Moreover, as an important performance concern, energy consumption is also simulated in the experiments. This paper also extends the assumptions of the original algorithm to study its performance over unreliable channels. Experimental results show that the reliability of channels has a great effect on the performance of the algorithm.
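A synchronous randomized leader election of the kind analysed here (in the style of Itai and Rodeh, a standard subject of PRISM case studies) can also be simulated directly to estimate the expected number of rounds. This Monte Carlo sketch is an illustration, not the PRISM model from the paper.

```python
import random

def elect_leader(n, k, rng):
    """One run of a synchronous randomized leader election: each round
    every one of n processes draws an id in 1..k; a process holding a
    unique maximum id wins. Returns the number of rounds taken.
    Termination has probability 1, matching the model-checking result."""
    rounds = 0
    while True:
        rounds += 1
        ids = [rng.randint(1, k) for _ in range(n)]
        if ids.count(max(ids)) == 1:
            return rounds

def expected_rounds(n, k, trials=2000, seed=0):
    """Monte Carlo estimate of the expected number of election rounds."""
    rng = random.Random(seed)
    return sum(elect_leader(n, k, rng) for _ in range(trials)) / trials
```

For 4 processes drawing from 8 ids, a unique maximum appears in a round with probability about 0.77, so the expected number of rounds is roughly 1.3.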
  • Locating Program Faults Effectively and Efficiently
    Chenglong Sun and Tian Huang, Chinese Academy of Sciences, China
    Fault localization is time-consuming and difficult, which makes it the bottleneck of the debugging process. To help facilitate this task, many fault localization techniques exist that help narrow down the region of suspicious code in a program. However, better accuracy in fault localization comes at a heavy computation cost: techniques that locate faults effectively also exhibit slow response rates. In this paper, we promote the use of pre-computation to shift the time-intensive computations to the idle periods of the coding phase, in order to speed up such techniques and achieve both low cost and high accuracy. We raise the research problems of finding suitable techniques that can be pre-computed and adapting them to the pre-computing paradigm in a continuous integration environment. Further, we use an existing fault localization technique to demonstrate our research exploration, and show the visions and challenges of the related methodologies.
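Spectrum-based fault localization is a representative family of such techniques. As an illustration (the abstract does not say which technique the authors pre-compute), the Ochiai metric ranks program elements by how strongly their execution correlates with failing tests:

```python
import math

def ochiai(coverage, results):
    """Ochiai suspiciousness for each program element.
    coverage[i][e] is 1 if test i executed element e;
    results[i] is True for a passing test, False for a failing one.
    score(e) = ef / sqrt(total_failures * (ef + ep)), where ef and ep
    count failing and passing tests that executed e."""
    n_elems = len(coverage[0])
    total_fail = sum(1 for r in results if not r)
    scores = []
    for e in range(n_elems):
        ef = sum(1 for i, r in enumerate(results) if not r and coverage[i][e])
        ep = sum(1 for i, r in enumerate(results) if r and coverage[i][e])
        denom = math.sqrt(total_fail * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores
```

The coverage matrix is exactly the kind of artifact that can be collected and partially aggregated during idle periods of a continuous integration cycle, leaving only the final ranking to compute when a failure appears.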
  • SMaRT as a Cryptographic Processor
    Saroja Kanchi, Nozar Tabrizi and Cody Hayden, Kettering University, USA
    SMaRT is a 16-bit 2.5-address RISC-type single-cycle processor, which was recently designed and successfully mapped into an FPGA chip in our ECE department. In this paper, we use SMaRT to run the well-known encryption algorithm, the Data Encryption Standard. For information security purposes, encryption is a must in today's sophisticated and ever-increasing computer communications such as ATM machines and SIM cards. For comparison and evaluation purposes, we also map the same algorithm onto the HC12, a same-size but CISC-type off-the-shelf microcontroller. Our results show that, compared to the HC12, SMaRT code is only 14% longer in terms of the static number of instructions, but about 10 times faster in terms of the number of clock cycles, and 7% smaller in terms of code size. Our results also show that 2.5-address instructions, a SMaRT selling point, account for 45% of all R-type instructions, resulting in a significant improvement in the static number of instructions and hence in code size as well as performance. Additionally, we see that the SMaRT short-branch range is sufficiently wide in 90% of cases in the SMaRT code. Our results also reveal that SMaRT's novel concept of locality of reference in using the MSBs of the registers in non-subroutine branch instructions stays valid, with a remarkable hit rate of 95%.
  • Critical Success Factors Affecting In-House ERP Development
    Zainab A. AlTuraiki, King Faisal University, Saudi Arabia
    Understanding the success and failure of ERP systems is important for organizations to deploy them effectively and to reduce failures. Despite the significant number of ERP adoptions from vendors, there are few cases of ERP systems developed in-house that show limited signs of failure. This study aims to investigate the factors affecting the success of in-house ERP development. The proposed model of this research was developed based on a conceptual model of software project CSFs. The researcher adopted a quantitative approach, with an online questionnaire for data collection distributed to professional ERP developers. The results of this study indicate that seven factors have a significant and positive relationship with in-house ERP development success.
  • Testing and Improving Local Adaptive Importance Sampling in LJF local-JT in Multiply Sectioned Bayesian Networks
    Dan Wu and Sonia Bhatti, University of Windsor, Canada
    Multiply Sectioned Bayesian Networks (MSBNs) provide a model for probabilistic reasoning in multi-agent systems. Exact inference is costly and difficult to apply in the context of MSBNs as the problem domain becomes larger and more complex, so approximate techniques are used as an alternative in such cases. Recently, the LJF-based Local Adaptive Importance Sampler (LLAIS) was developed for approximate reasoning in MSBNs. However, the prototype of LLAIS has been tested only on the Alarm network (37 nodes); further testing on larger networks has not been reported, so the scalability and reliability of the algorithm remain questionable. Hence, we tested LLAIS on three large networks (treated as local JTs), namely Hailfinder (56 nodes), Win95pts (76 nodes) and Pathfinder (109 nodes). Our experiments show that LLAIS without tuned parameters converges well for Hailfinder and Win95pts but not for the Pathfinder network. When these parameters are tuned, the algorithm shows considerable improvement in accuracy and convergence for all three networks tested.
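LLAIS builds on importance sampling in Bayesian networks. As a minimal illustration of the underlying idea (not the adaptive LJF-based sampler itself), likelihood weighting on a toy two-node network estimates a posterior by weighting prior samples with the evidence likelihood; adaptive schemes such as LLAIS refine this by updating the proposal distribution as sampling proceeds.

```python
import random

def likelihood_weighting(n_samples=50000, seed=0):
    """Estimate P(A=true | B=true) in a toy two-node network with
    P(A)=0.3, P(B|A)=0.9, P(B|~A)=0.2, by likelihood weighting:
    sample A from its prior, weight by P(B=true | A).
    Exact answer: 0.27 / 0.41 ~= 0.659."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = rng.random() < 0.3            # sample A from its prior
        w = 0.9 if a else 0.2             # weight = P(B=true | A=a)
        num += w * a
        den += w
    return num / den
```

The fixed prior used as proposal here is what makes plain likelihood weighting degrade on networks with unlikely evidence; tuning the proposal adaptively is precisely the improvement whose scalability the paper evaluates.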