Browsing by Author "Lent, Ricardo"
Now showing 1 - 20 of 24
Item
A Back Propagation Based Spiking Neural Network Approach for Intelligent Link Decisions In Satellite Communication (2021-05)
Visweswaran, Meenakshi; Lent, Ricardo; Gurkan, Deniz; Abdelhadi, Ahmed
A Spiking Neural Network (SNN) with a neuromorphic architecture for optimal link decisions is put forward in this paper. SNNs can adapt quickly to changes in the working environment to maintain or improve the selected performance metrics. Such results can be appealing for satellite networks with orbital operations involving either stationary or manned aids, which would provide directions for autonomy in CN decisions. On-board processing capability has traditionally been a limiting factor for advanced satellite communication strategies. Additionally, with deep-space exploration on the rise, the demand for bandwidth is increasing, which can be met by making communication systems more efficient. Manual updating procedures for satellite operations give rise to configuration errors; since AI continues to show strong performance, converting manual operations into intelligent ones can avoid some of these errors. In scenarios where the response delay of a human operator is considerable, the spacecraft must be able to make decisions autonomously. Intelligent systems can improve spacecraft reliability by being trained to react to unexpected situations and to guide the spacecraft to safer operational states through autonomous decision-making. This makes it an apt area in which to apply an SNN model for a lighter space network at the first-hop level. This thesis focuses on enabling flexible routing for link selection with the help of Spiking Neural Networks. The path-selection problem is approached by applying an SNN to classify satellite downlinks based on link cost to improve learning, and then analyzing the classification and link-decision capabilities of the network against a traditional neural network. The spiking network uses the Back Propagation (BP) support built into the Nengo framework. The system achieved better accuracy even when activation was provided in the hidden layer instead of the output layer. Tweaking the firing rates, epochs, and batch size of the data might yield better results. For the LEO scenario, a maximum accuracy of 86% was obtained on synthetic data using the SNN, and for the GEO scenario, a maximum of 98.5% was obtained.
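The thesis trains its spiking network with the backpropagation support built into Nengo; purely as a self-contained illustration of the underlying idea, the NumPy sketch below rate-encodes hypothetical downlink features with leaky integrate-and-fire (LIF) neurons and trains an error-driven linear readout to separate low-cost from high-cost links. The feature names, layer sizes, synthetic labels, and perceptron-style update are all assumptions standing in for the thesis's actual Nengo/BP model.

```python
import numpy as np

def lif_spike_counts(currents, t_sim=0.1, dt=0.001, tau=0.02, v_th=1.0):
    """Simulate a layer of LIF neurons driven by constant currents and
    return each neuron's spike count over the simulation window."""
    v = np.zeros_like(currents)
    counts = np.zeros_like(currents)
    for _ in range(int(t_sim / dt)):
        v += (dt / tau) * (currents - v)   # leaky integration
        fired = v >= v_th
        counts[fired] += 1                 # record spikes
        v[fired] = 0.0                     # reset membrane after a spike
    return counts

rng = np.random.default_rng(0)

# Hypothetical downlink features (e.g., SNR, elevation, queue load),
# normalized to [0, 1]; labels 0/1 = low-/high-cost link (synthetic rule).
X = rng.random((200, 3))
y = (X @ np.array([-1.0, -0.5, 1.5]) > 0).astype(int)

# Fixed random encoders project features to input currents; the readout
# is trained with a simple error-driven (perceptron-style) rule standing
# in for the backpropagation training used in the real system.
enc = rng.normal(size=(3, 50))
w = np.zeros(50)
for epoch in range(20):
    for xi, yi in zip(X, y):
        rates = lif_spike_counts(xi @ enc)     # spike-count features
        pred = int(rates @ w > 0)
        w += 0.01 * (yi - pred) * rates        # update only on errors

acc = np.mean([int(lif_spike_counts(xi @ enc) @ w > 0) == yi
               for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```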
Item
A Novel Consistency Metric for Web Application Performance Analysis using Geographically-Distributed Traffic Data (2020-05)
Dane, Levent; Gurkan, Deniz; Subhlok, Jaspal; Gabriel, Edgar; Lent, Ricardo
Web applications constitute the majority of Internet traffic today. These applications have been optimized for a desired level of user experience, and optimizing web application performance requires extensive testing. A reliability guarantee and the associated optimization measures pose a considerable challenge in such testing approaches, especially as worldwide access becomes the norm. Measurement of web application performance is typically conducted through models that approximate the expected user experience, the available service architectures, and network-related impairments. In addition, the components of such a test, namely the client and server sides with the network state in between, each have immense configuration variability and associated requirements for performance guarantees.
In this dissertation, we tackle the analysis of web application performance through a novel measure that provides a consistency indicator. We first define a measurement and metric-development methodology. We conducted our research to demonstrate what constitutes consistent performance on the parameters of content length and loading time, along with other parameters specific to the application. We then applied an empirical methodology to measure the consistency of web application performance through an extensive data-collection tool, NetForager. We developed this tool to collect repeatable web application performance data, in the form of traffic packet captures, at geographically-distributed locations around the nation. The tool uses a framework with container technologies to orchestrate isolated web application data collection. The consistency metric for a representative set of web applications has been calculated along with an error margin. The content length and delay during retrieval of the application data were used in the calculations to achieve a holistic performance perspective. We present consistency-metric analysis for 15 web applications to reason about how optimizations and acceleration methods may provide superior application performance consistency. More importantly, our metric lays a foundation for holistic external testing of application performance that can be agnostic to variations in end-user clients, application service architecture and associated servers, and network state. Furthermore, a geographically-distributed measurement of the consistency metric provided insights into how individual session counts and other application characteristics can be monitored.
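The dissertation develops its own metric definition; the sketch below is only a minimal illustration of the idea, scoring consistency as the inverse coefficient of variation of load time and content length across vantage points, with the standard error of the mean as the margin. The formula and sample values are assumptions, not NetForager's actual computation.

```python
import math
from statistics import mean, stdev

def consistency(samples):
    """Score consistency of repeated measurements as 1 / (1 + CV).

    CV (coefficient of variation) = stdev / mean; a perfectly stable
    metric scores 1.0, while high variability pushes the score toward 0.
    Returns (score, standard_error_of_mean).
    """
    mu, sigma = mean(samples), stdev(samples)
    score = 1.0 / (1.0 + sigma / mu)
    sem = sigma / math.sqrt(len(samples))
    return score, sem

# Page-load times (ms) and content lengths (bytes) for one application,
# one sample per geographically-distributed vantage point (made-up data).
load_times = [412, 398, 455, 407, 630, 419]
content_lengths = [152_300, 152_300, 149_800, 152_300, 152_300, 153_100]

for name, vals in [("load time", load_times),
                   ("content length", content_lengths)]:
    score, sem = consistency(vals)
    print(f"{name:>14}: consistency={score:.3f} (SEM={sem:.1f})")
```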
Item
Application Agnostic Network Traffic Modeling for Realistic Traffic Generation (2020-12)
Adeleke, Oluwamayowa Ade; Gurkan, Deniz; Subhlok, Jaspal; Gabriel, Edgar; Lent, Ricardo
Research and testing in networking sometimes require experiments that utilize real application network traffic. However, obtaining production network traffic data from industry partners for testing novel algorithms, protocols, and network functions is a significant pain point for many researchers in academia. Many industry operators are reluctant to share network traffic data with third parties, to avoid violating privacy policies and to prevent unintentional exposure of proprietary information to competitors. Therefore, many researchers resort to synthetic traffic generators in networking experiments. Our survey of over 7000 networking research papers revealed that most research projects exclusively use constant/maximum-throughput traffic generators in their evaluation experiments. These generators do not always produce traffic that resembles real production traffic: they often blast out packets at fixed rates or at rates drawn from statistical distributions. Existing realistic traffic generators are rarely used, and there is no standardized evaluation system for them. This work therefore develops a new application-agnostic framework for producing abstract, high-fidelity models of application network traffic patterns for realistic application traffic generation in laboratory environments. The framework includes a comprehensive evaluation system for realistic traffic-generation models.
We evaluated the methods and algorithms applied in the framework, then created and evaluated a new application traffic modeling method that combines clustering methods with stochastic modeling. The evaluation results show that the generated traffic is similar to actual production traffic for many types of applications. This work's outcome is vital to researchers and industry operators in computer networking, especially those involved in large-scale enterprise, data-center, and Internet of Things (IoT) network testing. The methods presented make it easy to investigate how various changes in a network's traffic patterns and infrastructure can impact its performance. Researchers can test new protocols and algorithms with realistic traffic derived from actual applications, without violating privacy policies or replaying extra-large traffic trace files.
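As a hedged illustration of "clustering plus stochastic modeling" (the framework's own algorithms are not reproduced here), the sketch below clusters packets by size and inter-arrival time, fits a first-order Markov chain over the cluster sequence, and replays packets sampled from the visited clusters. The two-mode synthetic trace and the choice of k are assumptions; real input would come from a packet capture.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in for a parsed capture: (inter-arrival time s, payload bytes).
small = np.column_stack([rng.exponential(0.001, 500), rng.normal(120, 10, 500)])
bulk = np.column_stack([rng.exponential(0.0001, 500), rng.normal(1400, 30, 500)])
trace = np.vstack([small, bulk])[rng.permutation(1000)]

# 1) Cluster packets into traffic "modes".
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(trace)

# 2) Stochastic model: first-order Markov chain over cluster labels.
P = np.ones((k, k))  # add-one smoothing of transition counts
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# 3) Generate synthetic packets: walk the chain, sampling empirically
#    from the current cluster's packets.
def generate(n):
    state = int(labels[0])
    out = []
    for _ in range(n):
        members = trace[labels == state]
        out.append(members[rng.integers(len(members))])
        state = rng.choice(k, p=P[state])
    return np.array(out)

synth = generate(1000)
print("real  mean size / iat:", trace[:, 1].mean(), trace[:, 0].mean())
print("synth mean size / iat:", synth[:, 1].mean(), synth[:, 0].mean())
```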
Item
Automating Mobile Task Offloading with Agent Based Auctions (2016-12)
Shah, Nidhi Niket; Lent, Ricardo; Yuan, Xiaojing; Benhaddou, Driss
Mobile cloud computing has been introduced as a promising technology for mobile services. It addresses mobile computing's fundamental problems, such as resource scarcity, frequent disconnections, and mobility, by executing mobile applications on resource providers external to the mobile device, i.e., by offloading them onto cloud servers. Cloud computing allows users to consume infrastructure, platforms, and software on demand. Data storage and processing in the cloud thus eliminate the need for mobile users to have a powerful device configuration (e.g., CPU speed, memory capacity), as all resource-intensive computing can be performed in the cloud. The data-center hardware and software are referred to as the cloud, and the cloud can be made available in a pay-as-you-use manner, with payment based on the services requested. These services are differentiated by type, availability, and quality. For instance, one cloud provider may offer a lower price than another to run a particular application, but with compromised quality. In some cases, a single cloud is not enough to meet a mobile user's demands. A new scheme is therefore needed in which mobile users can utilize multiple clouds in a unified fashion and have the opportunity to choose the best cloud provider to serve them. The present research addresses this issue and presents a design for an agent-based auction model in which mobile users receive service from the cloud provider that offers the minimum overall cost, accounting for both price and quality of service, where quality of service refers to energy and latency optimization. The agent-based framework has been built on the JADE framework to provide robustness. The concepts of an auction market and an auction manager are introduced, helping mobile users obtain energy-latency optimization at an optimal price paid to cloud providers for access to their resources. The proposed mechanism has been evaluated for different scenarios and yields an optimal solution in all cases. The approach can be extended in the future with more features.

Item
Creation of Flexible Data Structure for an Emerging Network Control Protocol (2015-05)
Padmanabhi, Satyajeet; Gurkan, Deniz; Lent, Ricardo; Moges, Mequanint A.
Due to the increasing number of versions of the OpenFlow protocol, maintaining isolated data-structure support for each version is becoming harder. There is a high degree of variability between OpenFlow versions: each specifies an interface and the collection of switch abstractions that can be manipulated. The focus of this thesis is therefore to use a data structure (Avro) that supports the OpenFlow protocol through the software infrastructure proposed by the Warp development group. Using it, we developed OpenFlow version 1.2 support for the Warp controller. The Warp architecture uses the Avro data structure, which offers advantages such as easy integration of new versions, updates to existing versions, run-time changes, version control, data exchange, and simple schema processing, all of which strongly affect the performance and flexibility of an OpenFlow controller. These factors are compared against other OpenFlow controller architectures such as Floodlight and Ryu; the observations indicate that Warp is a more flexible architecture than Floodlight and Ryu.

Item
Dynamic TTL Assignment in Caching Meta Algorithms and Study of the Effects on Caching In Named Data Networks (2017-12)
Hariri, Meysam; Lent, Ricardo; Benhaddou, Driss; Moges, Mequanint A.
NDN networks are the next generation of communication networks, built primarily on the principle of content reuse. When the majority of content is reusable, caching is the main contributor to performance in these networks. NDN routers use caching and eviction policies to shape performance and to control the use of limited storage in the network. This research introduces a dynamic expiration time, in the form of a TTL assigned to content, to control storage alongside the LRU eviction policy, and it thoroughly studies the most common caching policies, LCE and LCD, altered with TTL. The results show that assigning a sequence of TTL values to content objects does not increase the hit ratio in the policies studied, but it does introduce a new mechanism to control the hit ratio, distribute cache hits across a network of caches, and load-balance the links.
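As a minimal sketch of pairing a per-object TTL with LRU eviction (the thesis assigns sequences of TTL values dynamically and studies LCE/LCD placement variants; this toy cache does neither, and its TTLs are caller-supplied):

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """LRU cache whose entries also expire after a per-object TTL."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> (expiry_time, data)

    def get(self, name):
        item = self.store.get(name)
        if item is None:
            return None                      # cache miss
        expiry, data = item
        if time.monotonic() >= expiry:       # TTL expired: evict now
            del self.store[name]
            return None
        self.store.move_to_end(name)         # refresh LRU position
        return data

    def put(self, name, data, ttl):
        if name in self.store:
            self.store.move_to_end(name)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least-recently used
        self.store[name] = (time.monotonic() + ttl, data)

cache = TtlLruCache(capacity=2)
cache.put("/video/seg1", b"...", ttl=0.05)
cache.put("/video/seg2", b"...", ttl=10)
print(cache.get("/video/seg1") is not None)  # True: still fresh
time.sleep(0.06)
print(cache.get("/video/seg1") is not None)  # False: TTL expired
```

Varying the TTL assigned on insertion is the knob the thesis studies for steering hit ratio and spreading hits across a cache network.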
Item
Effectiveness of Cache Pollution Attacks in ICN Cache Services (2016-12)
Karimipoor, Andia; Lent, Ricardo; Yuan, Xiaojing; Benhaddou, Driss
Information-centric networking (ICN) is a new approach for the future Internet. The current Internet architecture was designed around host-to-host communication, and in recent years there have been several efforts to replace it. The key idea of ICN is that users focus on what content they want rather than where to get it from. Different ICN architectures have been developed; content-centric networking (CCN), named data networking (NDN), and content delivery networks (CDN) are examples. ICN differs from host-to-host networks in its naming, routing, security, and caching, and these new features open the door to new security threats and attacks. One such threat is attacks on ICN cache services.
In this master's thesis, we study cache pollution attacks on information-centric networking and investigate network performance by comparing a normal system to a system under cache pollution attack. Delay and path length are the parameters studied in both cases. We defined different cache sizes and policies to observe the impact of the attack on small versus large networks, and we later extended the research by studying the impact of the attack under different attack and attack-detection probabilities. The evolution of this new network architecture raises significant challenges for studying security attacks on ICN. We therefore implemented an ICN architecture in Python and simulated cache pollution attacks using FIFO and LRU caching policies to analyze the effectiveness of the attack on networks of different scales. Our large network topology was inspired by Gnutella, a large peer-to-peer network.

Item
Embedding Location-Based Network Connectivity within IPv6 Address (2014-05)
Araji, Bahaa; Gurkan, Deniz; Merchant, Fatima Aziz; Lent, Ricardo
IPv4 (Internet Protocol version 4), with its famous 32-bit addresses, has been used in networks for many decades [1] and would not have sustained its usability without NAT (Network Address Translation). IPv6 (Internet Protocol version 6), with its 128-bit addresses, carries little routing information [2]. In this thesis, we present ESPM (Embedding Switch ID, Port number, MAC address), a scheme that embeds the switch identification number, port number, and MAC (Media Access Control) address within the IPv6 address using SDN technology, imposing a device-connectivity hierarchy upon the address space. We amend the IPv6 global addressing scheme for hosts to include their MAC address as well as the switch and port numbers they are connected to. This scheme encodes information that would ordinarily require a lookup or query packets, and it decreases CAM (Content Addressable Memory) table entries on the switch by forwarding packets using the ESPM algorithm. After the ESPM algorithm checks the OF (OpenFlow) controller ID, OF switch ID, and port ID, the total number of packets transferred on the network to fulfill an ICMP (Internet Control Message Protocol) request-reply decreased by 28.1% in a 1-switch, 2-host topology. To demonstrate the feasibility of such an addressing scheme, we use a POF (Protocol Oblivious Forwarding) controller and POF switch [3] to implement ESPM, and we then measure the impact on the number of network-management packets transferred between hosts during connectivity tests.
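The exact bit layout of ESPM is defined in the thesis; assuming a hypothetical 32/24/24/48-bit split purely for illustration, packing and unpacking the connectivity fields into an IPv6 address is straightforward:

```python
import ipaddress

# Assumed field layout (the thesis defines its own): from the most
# significant bits, a 32-bit routing prefix, a 24-bit switch ID,
# a 24-bit port number, then the host's 48-bit MAC address.
SWITCH_BITS, PORT_BITS, MAC_BITS = 24, 24, 48

def espm_encode(prefix, switch_id, port, mac):
    value = prefix                       # 32-bit prefix seeds the top bits
    for field, bits in ((switch_id, SWITCH_BITS),
                        (port, PORT_BITS),
                        (mac, MAC_BITS)):
        assert field < (1 << bits), "field overflows its bit allocation"
        value = (value << bits) | field
    return ipaddress.IPv6Address(value)

def espm_decode(addr):
    value = int(addr)
    mac = value & ((1 << MAC_BITS) - 1); value >>= MAC_BITS
    port = value & ((1 << PORT_BITS) - 1); value >>= PORT_BITS
    switch_id = value & ((1 << SWITCH_BITS) - 1); value >>= SWITCH_BITS
    return value, switch_id, port, mac   # remaining bits = routing prefix

addr = espm_encode(prefix=0x2001_0DB8, switch_id=7, port=3,
                   mac=0x00_16_3E_AA_BB_CC)
print(addr)              # switch, port, and MAC are readable in the address
print(espm_decode(addr))  # so a switch can forward without a table lookup
```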
Item
Energy Aware Routing of Web Requests in Hybrid Cloud (2017)
Velusamy, Gandhimathi; Lent, Ricardo; Subhlok, Jaspal
Application services are normally deployed as web services on the cloud for scalability and fault tolerance. We propose an autonomous intelligent global load balancer (IGLB) that distributes requests across redundant clusters in an energy-efficient way without compromising quality of service.

Item
Energy-Delay Aware Web Request Routing Using Learning Automata (2018-12)
Velusamy, Gandhimathi 1970-; Subhlok, Jaspal; Lent, Ricardo; Gnawali, Omprakash; Gabriel, Edgar
The ever-increasing dependency on the Internet in our day-to-day lives and the pay-as-you-go model of cloud computing cause an extensive number of applications to be deployed as web services. Web services are normally deployed on clusters of redundant servers replicated across different geographical locations to provide reliable, high-quality service. Usually, a front-end server receives requests from clients and distributes them to the redundant servers according to load-balancing policies. The explosion of web services and this replication cause a massive number of servers to be run from data centers. These servers consume enormous amounts of electricity, and the resulting bills are a growing concern for data-center owners. The U.S. electricity market exhibits spatio-temporal variation in electricity prices. Normally, requests are served from the servers nearest to the clients, but this increases the load on data centers in more populated areas, and the electricity rates at the nearest locations may be higher. In this setting, having the front-end servers route requests to back-end servers based on electricity prices can control a data-center owner's cost of delivering web services. However, optimizing energy cost by serving a request from a location where electricity is cheaper may increase the response delay, depending on the distance between server and client and the state of the network and the server. In certain applications, increased latency can lead to revenue loss if dissatisfied customers cancel their subscriptions. Reducing energy cost without increasing latency is therefore a central challenge in web-based service delivery. In this dissertation, we propose a solution that reduces electricity costs for data-center owners while serving requests with low latency: an online learning-automata-based request-routing algorithm, run on the front-end servers, that selects back-end servers with energy-delay awareness. Our experiments on a cloud testbed with a real-time workload show better performance in both electricity cost and delay compared to existing methods.
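As an illustration of the learning-automaton machinery only (a linear reward-inaction scheme is assumed here, and the prices, delays, and cost weighting are made up), a front-end can adapt per-server selection probabilities from the observed energy-delay cost of each routed request:

```python
import numpy as np

rng = np.random.default_rng(2)

# Back-end locations with made-up electricity prices ($/kWh) and mean
# response delays (ms); the real algorithm would observe live values.
prices = np.array([0.12, 0.07, 0.06])
delays = np.array([40.0, 120.0, 55.0])

n = len(prices)
p = np.ones(n) / n          # action probabilities, one per server
LAMBDA = 0.05               # learning rate (linear reward-inaction)

def cost(i):
    """Noisy energy-delay cost of routing one request to server i,
    normalized to [0, 1]; the 50/50 weighting is an assumption."""
    c = 0.5 * prices[i] / prices.max() + 0.5 * delays[i] / delays.max()
    return np.clip(c + rng.normal(0, 0.05), 0, 1)

for _ in range(5000):
    i = rng.choice(n, p=p)               # pick a server stochastically
    reward = 1.0 - cost(i)               # low cost => high reward
    # L_RI update: shift probability mass toward rewarded actions only.
    p = p - LAMBDA * reward * p
    p[i] += LAMBDA * reward
    p /= p.sum()                         # guard against rounding drift

# Mass typically concentrates on index 2, the cheapest fast server.
print("selection probabilities:", np.round(p, 3))
```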
Item
Evaluating the Performance of CGR Routing Strategies in Delay Tolerant Networks using DtnSim (2018-12)
Loganathan, Shrilekha; Lent, Ricardo; Benhaddou, Driss; Sugawara, Junko; Shireen, Wajiha
Delay Tolerant Networking (DTN) networks have no continuous end-to-end connectivity; data undergoes a store-and-forward mechanism of custody transfer, in which bundles (data units) are taken into custody on a node-to-node basis. Nodes therefore deploy complex routing algorithms such as Contact Graph Routing (CGR), which predicts and generates time-evolving graphs of the contacts for path computation. Different path-computation techniques have been proposed for route-list computation in DTN networks, including Dijkstra's search and Yen's K-shortest-paths model [1], with metrics such as best delivery time (BDT) and the number of hops visited, as in the ION implementation (by NASA's Jet Propulsion Laboratory). Recently, several volume-aware route-computation techniques have been proposed; these consider the volume consumed in delivering a bundle and update the residual volume of the contact/route while forwarding. In this research, we consider two contact plans for experimental analysis: a simple contact-plan graph and a much denser graph for a deep-space scenario. All parameters required for route computation are made available through the contact-plan files.
Contact Plan Designer is used to emulate deep-space satellite networks and to design a feasible contact plan. DtnSim is the simulator used to run the DTN experiments for the various volume-aware route computations; it uses the CGR implementation of ION 3.5 and provides additional routing models currently proposed for further research. The experimental performance analysis covers throughput metrics (bundles delivered, mean hops traversed, delivery ratio) and overhead metrics (number of Dijkstra searches, route-table size, routes explored during forwarding, Shared Data Recorder (SDR) storage) for bundle traffic routed with different volume-aware techniques, namely first-contact volume-aware, all-contacts volume-aware, and all-contacts volume-aware with source routing in an extension block or header, across several routing algorithms: 1) initial+anchor, 2) first-ending, 3) first-depleted, 4) one-best-path, and 5) per-neighbor-best-path. The effect of topology and contacts on routing performance, and the route computations wasted when enqueued bundles are lost during forwarding, have not been studied extensively. In this work, we therefore studied and compared the above algorithms for deep space and formulated important aspects, challenges, and limitations in designing a CGR routing algorithm for deep space.
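The core of CGR-style path computation is an earliest-arrival search over time-limited contacts. Below is a minimal sketch with a made-up three-contact plan and no volume accounting (volume-aware variants would additionally decrement residual contact capacity as bundles are enqueued):

```python
import heapq

# A contact: (sender, receiver, start, end, one-way light time), all in
# seconds. This toy plan is invented; real plans come from contact-plan files.
contacts = [
    ("earth", "relay", 0, 100, 5),
    ("relay", "rover", 50, 200, 10),
    ("earth", "rover", 300, 400, 20),
]

def best_delivery_time(source, dest, t0):
    """Earliest-arrival (BDT) search over a contact plan.

    Dijkstra on arrival time: a bundle can use a contact only before the
    contact closes, waiting at a node until the contact opens if needed.
    """
    best = {source: t0}
    queue = [(t0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue                          # stale queue entry
        for frm, to, start, end, owlt in contacts:
            if frm != node or t >= end:
                continue                      # contact unusable from here
            arrival = max(t, start) + owlt    # wait for opening, then send
            if arrival < best.get(to, float("inf")):
                best[to] = arrival
                heapq.heappush(queue, (arrival, to))
    return None

print(best_delivery_time("earth", "rover", t0=0))   # 60: via the relay
```

Waiting until t=50 to use the relay contact still beats the direct contact that only opens at t=300, which is why CGR searches arrival times rather than hop counts.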
Item
Evaluation of Multiple Controller Software Defined Networks (2016-12)
Harsh, Sonal; Lent, Ricardo; Benhaddou, Driss; Yuan, Xiaojing
Software-Defined Networks (SDNs) are currently among the most actively researched areas in computer networking, and the demand for moving from traditional networks to SDNs has given rise to many challenges. As SDNs continue to advance, the issues of scalability, transmission delay, and packet loss must be solved. The larger the network, the greater the delay, as network growth causes congestion and transmission delays. In larger networks, a single-controller architecture is inefficient at managing the network: as the network expands, the load on the single controller increases. To tackle this issue, SDN architectures with multiple controllers have been introduced in recent research. Multi-controller SDN architectures can manage larger networks efficiently, decrease transmission loss, and improve fault tolerance, tackling network congestion by distributing the load among the controllers. In this paper, we evaluate the performance of multi-controller SDN architectures against a single-controller SDN architecture. The multi-controller architectures show better load management, lower transmission delay, and a lower packet-loss ratio, making them the more efficient solution for wider networks.
In the experiments, we used the POX SDN controller to implement the software-defined architecture, Mininet [2] as a network emulator to test complex topologies, and D-ITG to evaluate the effect of intense network traffic on the topologies under the different SDN architectures.

Item
Evaluation study of virtual network embedding for short-lived virtual networks (2015-12)
Palagummi, Mydhili; Lent, Ricardo; Moges, Mequanint A.; Yuan, Xiaojing
Cloud services are extensively used today for server hosting, data storage, and scientific and research purposes, and virtualization technology is an essential element of these services. Virtualization enables the creation of multiple virtual instances on physical infrastructure. Network virtualization is a recent advancement in this field through which virtual networks can be created over real physical networks (also called substrate networks). Such virtual networks facilitate testing and quick deployment of new technologies, better utilization of hardware, and more flexibility for users. A crucial element of network virtualization is the stage in which virtual networks are created on the substrate network. This process is critical because the number of virtual networks created on the substrate is high, so their placement must be done strategically. The creation of virtual networks on a substrate network is referred to as virtual network embedding; determining the best way to place multiple virtual networks on a substrate network while satisfying a given set of constraints is the virtual network embedding problem, and the techniques used to solve it are known as virtual network embedding algorithms. In this thesis, we evaluate and compare six virtual network embedding algorithms for embedding short-lived virtual networks on substrate networks with fat-tree and UUNET topologies. We discuss different metrics for evaluating the performance of embedding algorithms and compare the algorithms based on these metrics. In particular, we examine the probability of successfully embedding a virtual network, the average substrate path length, and the distribution pattern of virtual networks in the substrate network for the six algorithms. The aim of this thesis is to compare the performance of virtual network embedding algorithms, observe the nuances of the approaches that contribute to optimal results, and investigate embedding for the case of short-lived virtual networks.
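None of the six evaluated algorithms is reproduced here, but a baseline greedy embedder conveys the shape of the problem: map virtual nodes onto substrate nodes with spare CPU, then route virtual links over substrate paths with spare bandwidth, rejecting the request if either step fails. The node/link attributes and capacities below are assumptions.

```python
import networkx as nx

def greedy_embed(substrate, request):
    """Baseline greedy VNE (an illustration, not one of the six
    algorithms evaluated in the thesis). Returns (node_map, link_map)
    or None if the request must be rejected."""
    cpu = dict(substrate.nodes(data="cpu"))
    bw = {tuple(sorted(e)): d["bw"] for *e, d in substrate.edges(data=True)}
    node_map, link_map = {}, {}

    # Node mapping: biggest virtual nodes first, onto the roomiest hosts.
    for v, need in sorted(request.nodes(data="cpu"), key=lambda x: -x[1]):
        host = max((s for s in cpu if s not in node_map.values()
                    and cpu[s] >= need), key=cpu.get, default=None)
        if host is None:
            return None
        node_map[v] = host
        cpu[host] -= need

    # Link mapping: shortest substrate path with enough residual bandwidth.
    for a, b, d in request.edges(data=True):
        usable = nx.Graph()
        usable.add_edges_from((u, w) for (u, w), c in bw.items()
                              if c >= d["bw"])
        try:
            path = nx.shortest_path(usable, node_map[a], node_map[b])
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            return None
        for u, w in zip(path, path[1:]):
            bw[tuple(sorted((u, w)))] -= d["bw"]
        link_map[(a, b)] = path
    return node_map, link_map

substrate = nx.Graph()
substrate.add_nodes_from([("s1", {"cpu": 8}), ("s2", {"cpu": 4}),
                          ("s3", {"cpu": 6})])
substrate.add_edges_from([("s1", "s2", {"bw": 10}), ("s2", "s3", {"bw": 10})])
request = nx.Graph()
request.add_nodes_from([("v1", {"cpu": 4}), ("v2", {"cpu": 3})])
request.add_edge("v1", "v2", bw=5)
print(greedy_embed(substrate, request))
```

Metrics like the thesis's acceptance probability and average substrate path length fall out directly: run many such requests and count successes and path lengths.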
Item
Experimental Evaluation of Convergence Layers in Delay Tolerant Networks (2018-05)
Nayak, Sukanti; Lent, Ricardo; Benhaddou, Driss; Yuan, Xiaojing
Traditional Internet protocols like TCP/IP and UDP/IP are not designed to perform efficiently in Delay Tolerant Network (DTN) environments, which feature high delays, significant losses, and intermittent connectivity. DTN uses the Bundle Protocol (BP) in the bundle layer to store and forward data units, coping with the absence of an end-to-end connection. Since BP is an overlay protocol that can run above different transport-layer protocols, a corresponding convergence-layer protocol is needed to move data between BP and the transport protocol in both directions.
The convergence layer enhances the underlying transport protocol by adding services such as reliable delivery and message boundaries, along with additional functionality that makes the transport protocol suitable for extreme environments. This thesis focuses on the performance evaluation of the Licklider Transmission Protocol (LTP) convergence layer running over the Datagram Congestion Control Protocol (DCCP) under various realistic DTN conditions. LTPCL/DCCP is also compared against other convergence layers, namely LTPCL/UDP and TCPCL/TCP, with tests conducted between nodes communicating over each of the three stacks. Metrics such as throughput and file delivery time (FDT) are examined for these convergence-layer protocols under various bit error rates (BER), propagation delays, and link disruptions.

Item
Graph-based, Policy-driven Resource Mapping for Precise Allocations on Diverse Computer Networks (2022-07-31)
Baxley, Stuart; Gurkan, Deniz; Subhlok, Jaspal; Johnsson, Lennart; Lent, Ricardo
Distributed systems encompass a wide variety of compute platforms serving various computing industries. Shared infrastructure systems, including HPC, cloud, and testbeds, provide remote access to network and compute hardware to facilitate web services, compute jobs, research, and a number of other services. Allocation systems perform the mapping of resources between customer specifications and available hardware. Typically, these allocation systems are tailor-built for a particular system or environment, with a focus on mapping compute resources. Our research identified the lack of an existing allocation system able to represent any networked system and to treat network resources with the same priority as compute. This research proposes a flexible, graph-based resource-description data structure able to express diversity in network topology and device composition. Given this new structure, we designed and implemented two solvers for finding resource mappings between the request and infrastructure networks. Lastly, we implemented eight allocation policies able to consider both requester and infrastructure-provider requirements in selecting optimal network allocations. The resulting allocation system is evaluated through simulation, and we provide a detailed discussion of the results.
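A hedged sketch of the graph-based idea, not the thesis's solvers: with the request and the infrastructure both expressed as attributed graphs, candidate mappings can be enumerated as subgraph monomorphisms that respect CPU and bandwidth constraints, and a policy then picks among the feasible candidates. The topology, capacities, and the "minimize stranded CPU" policy below are made up.

```python
import networkx as nx
from networkx.algorithms import isomorphism as iso

# Infrastructure: nodes carry cpu capacity, links carry bandwidth.
infra = nx.Graph()
infra.add_nodes_from([("a", {"cpu": 16}), ("b", {"cpu": 8}),
                      ("c", {"cpu": 8}), ("d", {"cpu": 4})])
infra.add_edges_from([("a", "b", {"bw": 40}), ("a", "c", {"bw": 40}),
                      ("a", "d", {"bw": 40}), ("b", "c", {"bw": 10})])

# Request: a two-node topology with an explicit network requirement.
request = nx.Graph()
request.add_nodes_from([("r1", {"cpu": 8}), ("r2", {"cpu": 4})])
request.add_edge("r1", "r2", bw=20)

# Network constraints are first-class: a candidate must satisfy both
# node (cpu) and edge (bw) requirements during the monomorphism search.
matcher = iso.GraphMatcher(
    infra, request,
    node_match=lambda inode, rnode: inode["cpu"] >= rnode["cpu"],
    edge_match=lambda iedge, redge: iedge["bw"] >= redge["bw"],
)
candidates = [
    {req: inf for inf, req in m.items()}     # request node -> infra node
    for m in matcher.subgraph_monomorphisms_iter()
]

# Example policy: pack tightly by minimizing stranded cpu on the hosts.
def stranded_cpu(mapping):
    return sum(infra.nodes[h]["cpu"] - request.nodes[r]["cpu"]
               for r, h in mapping.items())

print(len(candidates), "feasible mappings")
print("policy pick:", min(candidates, key=stranded_cpu))
```

Swapping the scoring function is how different allocation policies (spread, pack, provider-preferred, and so on) select among the same feasible set.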
Item
Interoperability in Smartgrids Using OPC Unified Architecture (2016-12)
Chaudhary, Tushar; Benhaddou, Driss; Lent, Ricardo; Abolhassani, Mehdi T.
Power grids are the largest networks on the planet, but they are quickly becoming fractured and impervious to data analysis. Smartgrids are the next logical iteration they are moving towards, and microgrid architecture proposes smartgrids in their distributed form. All of these iterations of next-generation networks demand seamless interoperability. This work sets out to find appropriate standards that allow homogeneous integration of conventional and next-generation grids. A literature review was conducted to find existing solutions and possible research leads. The OPC Unified Architecture protocol specifications, which allow inter-operation of automation systems, are discussed in depth to make the case for their utility. Finally, a testbed using an OPC-enabled server-client architecture on a virtual smartgrid was created from scratch, and a performance evaluation of the proposed OPC UA interoperability solution was carried out to verify its merit.

Item
MEG-Based Functional Connectivity Biomarkers of Dyslexia (2014-12)
Iraola Goiburu, Inigo; Zouridakis, George; Malki, Heidar A.; Lent, Ricardo
Dyslexia is a learning disability related to reading, often characterized by difficulty with accurate word recognition, decoding, and spelling. The disorder affects approximately 10% of the population and is typically diagnosed using neuropsychological evaluation. The main objective of this thesis has been the development of unique measures, based on fast neurophysiological recordings, that may be used to improve detection and allow intervention at an earlier age, with improved outcomes. We used functional connectivity analysis to identify brain connectivity networks in task-free, resting-state magnetoencephalographic (MEG) recordings of brain activity obtained from two groups of participants, namely 21 dyslexia patients and 20 age-matched normal controls. To quantify interaction among brain regions and understand how brain networks are affected by dyslexia, we used Granger causality, which can estimate cause-and-effect relationships in terms of both strength and direction. A Granger connectivity matrix was computed for each subject individually, and group templates were then estimated by averaging all matrices in each group. Furthermore, we classified the subjects using support vector machines, with Fisher's criterion to rank the features and identify the best subset for maximum separation of the two groups. Our results show that a combined model based on connectivity matrices and graph-theory measures can provide 100% classification accuracy in separating the two groups, with 100% sensitivity and specificity. These findings suggest that analysis of functional connectivity patterns may provide a valuable tool for the early detection of dyslexia.
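As a toy stand-in for the pipeline (synthetic signals instead of MEG, lag-1 bivariate Granger causality, Fisher-score feature ranking, then an SVM), the sketch below shows the shape of the analysis; all data details are synthetic and the thesis's actual estimators differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def granger_matrix(X, lag=1):
    """Pairwise lag-1 Granger causality for channels in X (channels x time).
    Entry [i, j] = ln(restricted / full residual variance) for i -> j."""
    n, T = X.shape
    G = np.zeros((n, n))
    for j in range(n):
        y = X[j, lag:]
        own = X[j, :-lag]
        for i in range(n):
            if i == j:
                continue
            r = np.column_stack([own, np.ones(T - lag)])        # j's past only
            f = np.column_stack([own, X[i, :-lag], np.ones(T - lag)])
            var_r = np.var(y - r @ np.linalg.lstsq(r, y, rcond=None)[0])
            var_f = np.var(y - f @ np.linalg.lstsq(f, y, rcond=None)[0])
            G[i, j] = np.log(var_r / var_f)
    return G

def make_subject(coupled):
    """Synthetic 4-channel 'recording'; in the coupled group, channel 0
    drives channel 1 (a stand-in for a group-specific network change)."""
    X = rng.normal(size=(4, 500))
    for t in range(1, 500):
        X[1, t] += (0.6 if coupled else 0.0) * X[0, t - 1]
    return granger_matrix(X).ravel()

# 20 'patients' vs 20 'controls' (labels and signals synthetic, not MEG).
feats = np.array([make_subject(True) for _ in range(20)]
                 + [make_subject(False) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)

# Fisher criterion: rank features by between- vs within-class variance.
mu1, mu0 = feats[y == 1].mean(0), feats[y == 0].mean(0)
s1, s0 = feats[y == 1].var(0), feats[y == 0].var(0)
fisher = (mu1 - mu0) ** 2 / (s1 + s0 + 1e-12)
top = np.argsort(fisher)[-3:]                 # keep the best 3 features

clf = SVC(kernel="linear").fit(feats[:, top], y)
print("training accuracy:", clf.score(feats[:, top], y))
```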
Item
OpenFlow-based Distributed and Fault-Tolerant Software Switch Architecture (2014-05)
Velusamy, Gandhimathi; Gurkan, Deniz; Merchant, Fatima Aziz; Lent, Ricardo
We live in an era in which we are all connected virtually across the globe, sharing information electronically over the Internet every second of the day. Many networking devices are involved in delivering that information: routers, gateways, switches, PCs, laptops, handheld devices, etc. Switches are crucial elements in delivering packets to the intended recipients. The networking field is now moving toward Software-Defined Networking, and network elements are gradually being replaced by software applications driven by the OpenFlow protocol. For example, the switching functionality in local area networks can be provided by software switches such as Open vSwitch (OVS) and LINC-Switch. Organizations today depend on data centers to run their services, with application servers run from virtual machines on the hosts to better utilize computing resources and make the system more scalable. These application servers need to be continuously available to run the business for which they are deployed. Software switches are used to connect virtual machines as an alternative to top-of-rack switches; if such a software switch fails, the application servers cannot reach their clients, which may severely impact the business serviced by the application servers deployed on the virtual machines.
For reliable data connectivity, the switching elements need to be continuously functional, so today's networking infrastructure needs reliable and robust switches. In this study, the software switch LINC-Switch is implemented as a distributed application on multiple nodes to make it resilient to failure. Fault tolerance is achieved using the distribution properties of the Erlang programming language. By implementing the switch on three redundant nodes and starting it as a distributed application, the switch promptly resumes service by restarting on another node whenever it fails on the current node, using Erlang's failover/takeover mechanisms. The fault tolerance of the LINC-Switch is verified with ping-based experiments on the GENI testbed and on the Xen cluster in our lab.

Item
Policies for Upstream Web Server Selection Based on Energy Efficiency and Quality of Service (2016-05)
Shahane, Sachin; Lent, Ricardo; Moges, Mequanint A.; Yuan, Xiaojing
Web servers are an essential tool for providing users with requested content on the Internet, and Internet usage is growing day by day, making these software applications essential. The use of web servers is growing tremendously, but their performance and reliability have not improved at the same rate, and the rapid rise in energy consumption poses a serious threat to both energy resources and the environment, making green computing not only worthwhile but necessary. Due to environmental and economic concerns, energy consumption in web-services infrastructure has become a major topic of research, and energy conservation calls for an efficient redesign of policies, algorithms, and mechanisms. In this master's thesis, we propose policies for a web-server network that achieve energy efficiency. Policy-based management has emerged as a promising solution for managing large-scale web networks; its fundamental advantage is a machine-independent scheme for managing multiple devices from a single point of control. This thesis tackles the dual challenge of reducing energy consumption while maintaining Quality of Service by keeping delay low, against the backdrop of evolving Internet data centers and the increasing demands of web services. We propose four novel policies implemented on an upstream web server that manages the other web servers in the network. The RUBiS online auction benchmark was used to generate the workload in our experiments, in which we evaluated HAProxy with its various algorithms and compared those results against our policy-based results. We found a significant energy reduction in the upstream web-server network while satisfying Quality of Service (QoS) requirements.
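The four policies themselves are not reproduced here; as an illustration of policy-driven upstream selection under a QoS delay bound, here is a sketch with invented policy names, server fields, and weights:

```python
def pick_server(servers, policy, qos_delay_ms=100):
    """Select a back-end server under a named policy; candidates that
    violate the QoS delay bound are excluded first. Policy names and
    server fields are illustrative, not the thesis's four policies."""
    ok = [s for s in servers if s["delay_ms"] <= qos_delay_ms]
    if not ok:
        ok = servers                      # degraded mode: best effort
    if policy == "min-energy":
        return min(ok, key=lambda s: s["watts"])
    if policy == "min-delay":
        return min(ok, key=lambda s: s["delay_ms"])
    if policy == "energy-delay":          # weighted compromise
        return min(ok, key=lambda s: 0.5 * s["watts"] / 400
                                     + 0.5 * s["delay_ms"] / qos_delay_ms)
    raise ValueError(policy)

servers = [
    {"name": "w1", "watts": 180, "delay_ms": 35},
    {"name": "w2", "watts": 120, "delay_ms": 90},
    {"name": "w3", "watts": 95,  "delay_ms": 140},   # violates QoS bound
]
for policy in ("min-energy", "min-delay", "energy-delay"):
    print(policy, "->", pick_server(servers, policy)["name"])
```

Because the QoS filter runs before the energy objective, the cheapest server (w3) is never chosen while it misses the delay bound, which mirrors the thesis's goal of saving energy without sacrificing QoS.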
Item
Policy Carry-Over for Mobility in Software Defined Networks (2014-12)
Vemuri, Kiran Kameswari; Gurkan, Deniz; Merchant, Fatima Aziz; Lent, Ricardo
Due to the increasing number of mobile devices connected to networks, it is getting harder by the day to manage these devices as they hop between networks. Network management often involves implementing policies that are generic to all devices connected to the network as well as policies that are specific to individual devices.
To support this type of network management, static policies have to be set up across the networks using middlebox technologies such as firewalls, network policy servers, and authentication servers. These middleboxes control the activity of the devices connected to the network using the policies, but they often host a huge number of policies that complicate the network setup and make policy management a herculean task. In recent years, the development of Software-Defined Networking practices has made it simple and intuitive to instantiate programmable networks and automate network functions. We propose a solution that automates network policy management for mobility in a Software-Defined Network: the policies are carried dynamically across the network as a host or device moves from one place to another, without any action from the network administrator. This implementation can also automate policy repair in the case of network changes or errors.
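A minimal sketch of the carry-over idea (the rule format, switch state, and event handler are all assumptions, not the thesis's implementation): key policies by host identity, so that a mobility event retracts the host's rules at the old switch and re-installs them at the new one.

```python
# Controller-side state: policies are keyed by host, not by switch.
host_policies = {
    "aa:bb:cc:dd:ee:01": [{"match": "tcp_dst=80", "action": "allow"},
                          {"match": "tcp_dst=23", "action": "drop"}],
}
attachment = {}          # host MAC -> switch it is currently attached to
flow_tables = {}         # switch   -> list of installed rules

def host_moved(mac, new_switch):
    """Handler for a host-mobility event observed by the controller."""
    old_switch = attachment.get(mac)
    if old_switch is not None:
        flow_tables[old_switch] = [r for r in flow_tables[old_switch]
                                   if r["host"] != mac]   # retract old rules
    for rule in host_policies.get(mac, []):               # carry policy over
        flow_tables.setdefault(new_switch, []).append({"host": mac, **rule})
    attachment[mac] = new_switch

host_moved("aa:bb:cc:dd:ee:01", "switch-3")
host_moved("aa:bb:cc:dd:ee:01", "switch-7")   # host roams
print(flow_tables)   # the host's rules now live only on switch-7
```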