Thursday, 12 July 2012

Low-Complexity Image Processing for Real-Time Detection of Neonatal Clonic Seizures


ABSTRACT:-

In this paper, we consider a novel low-complexity real-time image-processing-based approach to the detection of neonatal clonic seizures. Our approach is based on the extraction, from a video of a newborn, of an average luminance signal representative of the body movements. Since clonic seizures are characterized by periodic movements of parts of the body (e.g., the limbs), by evaluating the periodicity of the extracted average luminance signal it is possible to detect the presence of a clonic seizure. The periodicity is investigated, through a hybrid autocorrelation-Yin estimation technique, on a per-window basis, where a time window is defined as a sequence of consecutive video frames. While processing is first carried out on a single window basis, we extend our approach to interlaced windows. The performance of the proposed detection algorithm is investigated, in terms of sensitivity and specificity, through receiver operating characteristic curves, considering video recordings of newborns affected by neonatal seizures.
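The periodicity check on the average luminance signal can be sketched in a few lines of Python. This is a minimal autocorrelation-only stand-in for the paper's hybrid autocorrelation-Yin estimator; the function names and the 0.5 threshold are illustrative, not taken from the paper:

```python
import numpy as np

def average_luminance(frames):
    """One scalar per video frame: the mean pixel luminance."""
    return np.array([np.asarray(f, dtype=float).mean() for f in frames])

def periodicity_score(signal, max_lag=None):
    """Peak of the normalized autocorrelation over lags >= 1.
    Close to 1 for a strongly periodic signal (e.g., clonic limb
    movement), low for noise-like motion."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    if max_lag is None:
        max_lag = len(x) // 2
    denom = float(np.dot(x, x))
    if denom == 0.0:
        return 0.0
    return max(float(np.dot(x[:-k], x[k:])) / denom
               for k in range(1, max_lag))

def detect(signal, window, threshold=0.5):
    """Per-window detection: flag each window of consecutive frames
    whose periodicity score exceeds the threshold."""
    return [periodicity_score(signal[s:s + window]) > threshold
            for s in range(0, len(signal) - window + 1, window)]
```

A periodic luminance trace (one value per frame) scores near 1 and trips the detector, while random frame-to-frame motion stays well below the threshold.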

Keywords:- IEEE Project 2012 Titles, Networking titles 2012, Image Processing Titles 2012, Matlab Titles 2012

ISP: An Optimal Out-of-Core Image-Set Processing Streaming Architecture for Parallel Heterogeneous Systems



ABSTRACT:-

Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory, demands that are compounded by the large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems to address this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems, both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.

Keywords: IEEE Project Title 2012, Image Processing Title 2012, Networking Title 2012

Hyperconnections and Hierarchical Representations for Grayscale and Multiband Image Processing


ABSTRACT:-

Connections in image processing are an important notion that describes how pixels can be grouped together according to their spatial relationships and/or their gray-level values. In recent years, several works were devoted to the development of new theories of connections among which hyperconnection (h-connection) is a very promising notion. This paper addresses two major issues of this theory. First, we propose a new axiomatic that ensures that every h-connection generates decompositions that are consistent for image processing and, more precisely, for the design of h-connected filters. Second, we develop a general framework to represent the decomposition of an image into h-connections as a tree that corresponds to the generalization of the connected component tree. Such trees are indeed an efficient and intuitive way to design attribute filters or to perform detection tasks based on qualitative or quantitative attributes. These theoretical developments are applied to a particular fuzzy h-connection, and we test this new framework on several classical applications in image processing, i.e., segmentation, connected filtering, and document image binarization. The experiments confirm the suitability of the proposed approach: It is robust to noise, and it provides an efficient framework to design selective filters.

Keywords: IEEE Project Title 2012, Image Processing Title 2012, Networking Title 2012.

Classification of Dielectric Barrier Discharges Using Digital Image Processing Technology



ABSTRACT:-

A digital image processing technique, based on the gray-level histogram obtained from images of the discharge, is proposed to classify the two kinds of dielectric barrier discharge (DBD) modes. With an increase of the applied voltage, frequency, or exposure time, the kurtosis and the skewness of the gray levels decrease and their standard deviation increases significantly in the filamentary mode; in the homogeneous mode, by contrast, the kurtosis, skewness, and standard deviation remain almost constant. The majority of the pixels correspond to near-zero gray levels in the filamentary mode, whereas they have larger gray levels in the homogeneous mode. With a decrease of the pressure, the mean gray level in the homogeneous mode increases significantly when the voltage is kept fixed, suggesting that the onset voltage at higher pressure is larger than that at lower pressure. The mean gray level in Ar is larger than that in He at a fixed voltage, which may indicate that the onset voltage in He is larger than that in Ar. These results show that the method is both effective and simple for classifying the DBD modes.
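The histogram statistics the abstract relies on (mean, standard deviation, skewness, kurtosis of the gray levels) are straightforward to compute from a discharge image; a minimal sketch, with the function name chosen here purely for illustration:

```python
import numpy as np

def gray_level_stats(image):
    """Histogram moments used to separate the two DBD modes.
    In the filamentary mode most pixels sit near gray level 0, so the
    distribution is strongly right-skewed and heavy-tailed (large
    skewness and kurtosis); the homogeneous mode is much flatter."""
    g = np.asarray(image, dtype=float).ravel()
    mean = g.mean()
    std = g.std()
    skew = ((g - mean) ** 3).mean() / std ** 3 if std > 0 else 0.0
    kurt = ((g - mean) ** 4).mean() / std ** 4 - 3.0 if std > 0 else 0.0
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt}
```

A mostly-dark image with a few bright filaments produces large positive skewness and kurtosis, while a uniformly lit (homogeneous-mode) image yields values near zero.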

Keywords:- IEEE Project Title 2012, Image Processing Title 2012, Networking Title 2012.

Automated Multiscale Morphometry of Muscle Disease From Second Harmonic Generation Microscopy Using Tensor-Based Image Processing



ABSTRACT:-

Practically all chronic diseases are characterized by tissue remodeling that alters organ and cellular function through changes to normal organ architecture. Some morphometric alterations become irreversible and account for disease progression even at the cellular level. Early diagnostics to categorize tissue alterations, as well as monitoring progression or remission of disturbed cytoarchitecture upon treatment in the same individual, form a newly emerging field. They strongly challenge spatial resolution and require advanced imaging techniques and strategies for detecting morphological changes. We use a combined second harmonic generation (SHG) microscopy and automated image processing approach to quantify morphology with age in an animal model of inherited Duchenne muscular dystrophy (the mdx mouse). Multiphoton XYZ image stacks from tissue slices reveal vast morphological deviation in muscles from old mdx mice at different scales of cytoskeleton architecture: cell calibers are irregular, myofibrils within cells are twisted, and sarcomere lattice disruptions (detected as "verniers") are larger in number compared to samples from healthy mice. In young mdx mice, such alterations are only minor. The boundary-tensor approach, adapted and optimized for SHG data, is suitable for quick quantitative morphometry in whole tissue slices. The overall detection performance of the automated algorithm compares very well with manual "by eye" detection, the latter being time consuming and prone to subjective errors. Our algorithm outperforms manual detection in speed, with similar reliability. This approach will be an important prerequisite for the implementation of clinical image databases to diagnose and monitor specific morphological alterations in chronic (muscle) diseases.

Keywords: IEEE Project Title 2012, Image Processing Title 2012, Cloud Computing 2012.

A Generalized Logarithmic Image Processing Model Based on the Gigavision Sensor Model


The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
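For readers unfamiliar with the LIP model, its classical operations (the special case that the GLIP model generalizes) can be written down in a few lines. The definitions below follow the standard LIP formulation with gray-tone range bound M, not anything specific to this paper:

```python
M = 256.0  # gray-tone range bound of the LIP model

def lip_add(a, b):
    """LIP addition: a (+) b = a + b - a*b/M.
    Unlike ordinary addition, the result stays below M."""
    return a + b - a * b / M

def lip_scalar(lam, a):
    """LIP scalar multiplication: lam (x) a = M - M*(1 - a/M)**lam.
    For integer lam this is lam-fold repeated LIP addition."""
    return M - M * (1.0 - a / M) ** lam
```

The identities `lip_add(0, a) == a`, `lip_scalar(1, a) == a`, and `lip_scalar(2, a) == lip_add(a, a)` are easy to verify, and two near-saturated gray tones added together still stay below M.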
Keywords: IEEE Project Title 2012, Image Processing 2012, Networking Title 2012.



A Complete Processing Chain for Shadow Detection and Reconstruction in VHR Images



ABSTRACT:-
The presence of shadows in very high resolution (VHR) images can represent a serious obstacle for their full exploitation. This paper addresses the problem as a whole by proposing a complete processing chain, which relies on various advanced image processing and pattern recognition tools. The first key point of the chain is that shadow areas are not only detected but also classified to allow their customized compensation. The detection and classification tasks are implemented by means of the state-of-the-art support vector machine approach. A quality check mechanism is integrated in order to reduce subsequent misreconstruction problems. The reconstruction is based on a linear regression method that compensates shadow regions by adjusting the intensities of the shaded pixels according to the statistical characteristics of the corresponding nonshadow regions. Moreover, borders are explicitly handled by making use of adaptive morphological filters and linear interpolation to prevent possible border artifacts in the reconstructed image. Experimental results obtained on three VHR images representing different shadow conditions are reported, discussed, and compared with two other reconstruction techniques.
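The compensation step can be illustrated with a minimal sketch: linearly adjust the shaded pixels so their mean and standard deviation match those of the corresponding nonshadow region. This is a simplified stand-in; the paper's regression, per-class compensation, and border handling are more elaborate:

```python
import numpy as np

def compensate_shadow(shadow_pixels, nonshadow_pixels):
    """Linear intensity adjustment y = a*x + b with
    a = std(nonshadow)/std(shadow) and b = mean(nonshadow) - a*mean(shadow),
    so the compensated shadow region matches the reference region's
    first- and second-order statistics."""
    s = np.asarray(shadow_pixels, dtype=float)
    ns = np.asarray(nonshadow_pixels, dtype=float)
    a = ns.std() / s.std() if s.std() > 0 else 1.0
    b = ns.mean() - a * s.mean()
    return a * s + b
```

By construction the output region has exactly the reference mean and standard deviation, which is the essence of statistics-matching shadow reconstruction.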

Keywords:- IEEE Project Title 2012, Matlab Title, Image Processing Title, Data Mining Title.

Saturday, 7 July 2012

Cloud Computing Security: From Single to Multi-clouds



ABSTRACT:-

The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers, but these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or in other words "interclouds" or "cloud-of-clouds", has emerged recently. This paper surveys recent research related to single- and multi-cloud security and addresses possible solutions. It is found that the research into the use of multi-cloud providers to maintain security has received less attention from the research community than has the use of single clouds. This work aims to promote the use of multi-clouds due to their ability to reduce security risks that affect the cloud computing user.

Keywords:- IEEE Project Titles 2012, Project Titles 2012, Cloud Computing Titles, Wireless Communication Titles, Networking Titles.

Load-Balancing Multipath Switching System with Flow Slice



ABSTRACT:-
Multipath Switching systems (MPS) are intensely used in state-of-the-art core routers to provide terabit or even petabit switching capacity. One of the most intractable issues in designing an MPS is how to load balance traffic across its multiple paths without disturbing intraflow packet order. Previous packet-based solutions either suffer from delay penalties or lead to O(N^2) hardware complexity, and hence do not scale. Flow-based hashing algorithms also perform badly due to the heavy-tailed flow-size distribution. In this paper, we develop a novel scheme, namely Flow Slice (FS), which cuts each flow into flow slices at every intraflow interval larger than a slicing threshold and balances the load at this finer granularity. Based on studies of tens of real Internet traces, we show that with a slicing threshold of 1-4 ms, the FS scheme achieves load-balancing performance comparable to the optimal one. It also limits the probability of out-of-order packets to a negligible level (10^-6) on three popular MPSes at the cost of little hardware complexity and an internal speedup of up to two. These results are proven by theoretical analyses and also validated through trace-driven prototype simulations.
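The core slicing idea can be sketched in a few lines: a flow is cut wherever the gap between consecutive packets exceeds the slicing threshold, and each slice is then hashed to a path independently. This is illustrative only; the thresholds, hash function, and data structures of the actual hardware design differ:

```python
def flow_slices(arrival_times, threshold_ms=4.0):
    """Cut one flow's packet arrival sequence into flow slices:
    a new slice starts whenever the intraflow gap exceeds the
    slicing threshold. Packets inside a slice keep their order."""
    slices, current = [], [arrival_times[0]]
    for prev, t in zip(arrival_times, arrival_times[1:]):
        if t - prev > threshold_ms:
            slices.append(current)
            current = []
        current.append(t)
    slices.append(current)
    return slices

def assign_path(slice_id, flow_key, n_paths):
    """Hash (flow, slice) so different slices of the same flow may
    take different paths, balancing load at a finer granularity
    than whole-flow hashing."""
    return hash((flow_key, slice_id)) % n_paths
```

Because any two packets in the same slice are closer together than the threshold (around the maximum path-delay difference), reordering across paths is rare even though slices of one flow spread over multiple paths.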

Keywords:- IEEE Project Titles 2012, Networking Titles, Cloud Communication Titles, Secure Computing Titles.

A New Cell Counting Based Attack Against Tor.



Various low-latency anonymous communication systems such as Tor and Anonymizer have been designed to provide anonymity services for users. In order to hide the communication of users, most anonymity systems pack the application data into equal-sized cells (e.g., 512 B for Tor, a known real-world, circuit-based, low-latency anonymous communication network). Via extensive experiments on Tor, we found that the size of IP packets in the Tor network can be very dynamic, because a cell is an application-level concept and the IP layer may repack cells. Based on this finding, we investigate a new cell-counting-based attack against Tor, which allows the attacker to confirm the anonymous communication relationship among users very quickly. In this attack, by marginally varying the number of cells in the target traffic at the malicious exit onion router, the attacker can embed a secret signal into the variation of the cell count of the target traffic. The embedded signal is carried along with the target traffic to the malicious entry onion router, where an accomplice of the attacker detects the embedded signal based on the received cells and confirms the communication relationship among users. We have implemented this attack against Tor, and our experimental data validate its feasibility and effectiveness. There are several unique features of this attack. First, it is highly efficient and can confirm very short communication sessions with only tens of cells. Second, it is effective, with a detection rate approaching 100% at a very low false positive rate. Third, it is possible to implement the attack in a way that appears very difficult for honest participants to detect (e.g., using our hopping-based signal embedding).
Keywords:- IEEE Project Titles 2012, Networking Titles, Wireless Networking Titles, Cloud Communication Titles, Secure Communication Titles.

View-invariant action recognition based on Artificial Neural Networks.



In this paper, a novel view-invariant action recognition method based on neural network representation and recognition is proposed. The novel representation of action videos is based on learning spatially related human body posture prototypes using self-organizing maps. Fuzzy distances from the human body posture prototypes are used to produce a time-invariant action representation. Multilayer perceptrons are used for action classification. The algorithm is trained using data from a multi-camera setup, and an arbitrary number of cameras can be used to recognize actions within a Bayesian framework. The proposed method can also be applied, without any modification, to videos depicting interactions between humans. The use of information captured from different viewing angles leads to high classification performance. The proposed method is the first to have been tested in such challenging experimental setups, which demonstrates its ability to deal with most of the open issues in action recognition.
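The fuzzy-distance representation can be illustrated with standard fuzzy c-means style memberships; this is an assumption for illustration, as the paper's exact fuzzy distance definition may differ:

```python
import numpy as np

def fuzzy_posture_representation(feature_vec, prototypes, m=2.0):
    """Fuzzy membership of one frame's posture feature vector to each
    posture prototype (rows of `prototypes`, e.g. SOM codebook vectors).
    Fuzzy c-means style weights: closer prototypes get larger
    memberships, and the memberships sum to 1. Averaging these vectors
    over all frames of a video gives a duration-invariant
    representation of the action."""
    d = np.linalg.norm(prototypes - feature_vec, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()
```

Because the per-frame vectors are normalized and then pooled over the whole video, two executions of the same action at different speeds map to similar representations.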
Keywords: IEEE Project Title 2012, Networking Title, Cloud Computing Title, Wireless Communication Title.  

Automatic Reconfiguration for Large-Scale Reliable Storage Systems



ABSTRACT:-

Byzantine-fault-tolerant replication enhances the availability and reliability of Internet services that store critical state and preserve it despite attacks or software errors. However, existing Byzantine-fault-tolerant storage systems either assume a static set of replicas or have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the solutions or the consistency levels they provide). This can be problematic in long-lived, large-scale systems where system membership is likely to change during the system lifetime. In this paper, we present a complete solution for dynamically changing system membership in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system membership and periodically notifies other system nodes of membership changes. The membership service runs mostly automatically, to avoid human configuration errors; is itself Byzantine-fault-tolerant and reconfigurable; and provides applications with a sequence of consistent views of the system membership. We demonstrate the utility of this membership service by using it in a novel distributed hash table called dBQS that provides atomic semantics even across changes in replica sets. dBQS is interesting in its own right because its storage algorithms extend existing Byzantine quorum protocols to handle changes in the replica set, and because it differs from previous DHTs by providing Byzantine fault tolerance and offering strong semantics. We implemented the membership service and dBQS. Our results show that the approach works well in practice: the membership service is able to manage a large system, and the cost to change the system membership is low.

Keywords:- IEEE Project Titles 2012, Secure Communication Titles, Cloud Computing Titles, Networking Titles.


Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs


The multihop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replaying routing information. An adversary can exploit this defect to launch various harmful or even devastating attacks against the routing protocols, including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques or efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure WSNs against adversaries misdirecting the multihop routing, we have designed and implemented TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios, including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TARF, we also demonstrated a proof-of-concept mobile target detection application that functions well against an antidetection mechanism.
Keywords:- Secure Communication Titles, IEEE Project Titles 2012, Mobile Communication Titles, Wireless Communication Titles.

Cooperative Download in Vehicular Environments



We consider a complex (i.e., nonlinear) road scenario where users aboard vehicles equipped with communication interfaces are interested in downloading large files from road-side Access Points (APs). We investigate the possibility of exploiting opportunistic encounters among mobile nodes so as to augment the transfer rate experienced by vehicular downloaders. To that end, we devise solutions for the selection of carriers and data chunks at the APs, and evaluate them in real-world road topologies, under different AP deployment strategies. Through extensive simulations, we show that carry&forward transfers can significantly increase the download rate of vehicular users in urban/suburban environments, and that this result holds across diverse mobility scenarios, AP placements, and network loads.
Keywords: IEEE Project Titles 2012, Mobile Computing Titles, Wireless Communication Titles, Networking Titles.

Protecting Location Privacy in Sensor Networks Against a Global Eavesdropper


While many protocols for sensor network security provide confidentiality for the content of messages, contextual information usually remains exposed. Such contextual information can be exploited by an adversary to derive sensitive information such as the locations of monitored objects and data sinks in the field. Attacks on these components can significantly undermine any network application. Existing techniques defend against the leakage of location information from a limited adversary who can only observe network traffic in a small region. However, a stronger adversary, the global eavesdropper, is realistic and can defeat these existing techniques. This paper first formalizes the location privacy issues in sensor networks under this strong adversary model and computes a lower bound on the communication overhead needed for achieving a given level of location privacy. The paper then proposes two techniques that provide location privacy to monitored objects (source-location privacy), periodic collection and source simulation, and two techniques that provide location privacy to data sinks (sink-location privacy), sink simulation and backbone flooding. These techniques provide trade-offs between privacy, communication cost, and latency. Through analysis and simulation, we demonstrate that the proposed techniques are efficient and effective for source- and sink-location privacy in sensor networks.

Keywords: IEEE Project Titles 2012, Mobile Computing Titles, Wireless Communication Titles.

Distributed Throughput Maximization in Wireless Networks via Random Power Allocation



We develop a distributed throughput-optimal power allocation algorithm for wireless networks. The study of this problem has been limited by the nonconvexity of the underlying optimization problems, which prohibits an efficient solution even in a centralized setting. By generalizing the randomization framework originally proposed for input-queued switches to the SINR rate-based interference model, we characterize throughput-optimality conditions that enable an efficient and distributed implementation. Using a gossiping algorithm, we develop a distributed power allocation algorithm that satisfies the optimality conditions, thereby achieving (nearly) 100 percent throughput. We illustrate the performance of our power allocation solution through numerical simulation.

Keywords: IEEE Project Titles, Mobile Computing Titles, Cloud Computing Titles, Wireless Communication Titles.

Network Assisted Mobile Computing with Optimal Uplink Query Processing



Many mobile applications retrieve content from remote servers via user-generated queries. Processing these queries is often needed before the desired content can be identified. Processing the request on the mobile device can quickly sap its limited battery resources. Conversely, processing user queries at remote servers can lead to slow response times due to the communication latency incurred during transmission of the potentially large query. We evaluate a network-assisted mobile computing scenario where mid-network nodes with "leasing" capabilities are deployed by a service provider. Leasing computation power can reduce battery usage on the mobile devices and improve response times. However, borrowing processing power from mid-network nodes comes at a leasing cost, which must be accounted for when deciding where processing should occur. We study the tradeoff between battery usage, processing and transmission latency, and mid-network leasing. We use a dynamic programming framework to solve for the optimal processing policies, which specify the amount of processing to be done at each mid-network node in order to minimize the processing and communication latency and processing costs. Through numerical studies, we examine the properties of the optimal processing policy and the core tradeoffs in such systems.
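The dynamic programming flavor of the problem can be illustrated on a toy cost model. The costs below are entirely hypothetical (the paper's model also covers latency terms); the sketch only shows how a DP chooses how much of a query to process at each node along the path:

```python
from functools import lru_cache

def optimal_processing(total_units, proc_cost, trans_cost):
    """Toy DP for where to process a query along a path of nodes.
    proc_cost[i]: per-unit processing cost at node i (node 0 is the
    mobile device, so this models battery; later nodes model leasing).
    trans_cost[i]: per-unit cost of forwarding the still-unprocessed
    part of the query from node i to node i+1. Everything must be
    processed by the last node."""
    n = len(proc_cost)

    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == n - 1:
            return remaining * proc_cost[i]  # last node finishes the job
        return min(
            k * proc_cost[i]                      # process k units here
            + (remaining - k) * trans_cost[i]     # forward the rest
            + best(i + 1, remaining - k)
            for k in range(remaining + 1)
        )

    return best(0, total_units)
```

With an expensive mobile device and a cheap mid-network node, the DP defers all processing downstream; with the costs reversed, it processes everything locally.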
Keywords: IEEE Project Titles 2012, Mobile Computing Titles, Wireless Communication Titles, Networking Titles, Cloud Computing Titles.

Friday, 6 July 2012

AMPLE: An Adaptive Traffic Engineering System Based on Virtual Routing Topologies


Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionalities in IP-based environments, efficient resource management and control solutions for dynamic traffic conditions are still to be obtained. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control using multiple virtualized routing topologies. The proposed system consists of two complementary components. Offline link weight optimization takes the physical network topology as input and aims to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation, through an optimized setting of link weights. Based on these diverse paths, adaptive traffic control performs intelligent traffic splitting across individual routing topologies in reaction to network dynamics monitored at short timescales. According to our evaluation with real network topologies and traffic traces, the proposed system is able to cope almost optimally with unpredicted traffic dynamics and, as such, constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.
Keywords:- IEEE Project Titles 2012, Wireless Communication Titles, Data Mining Titles, Networking Titles, Mobile Computing Titles.

Cooperative Data Dissemination via Roadside WLANs.



Data dissemination services embrace a wide variety of telematic applications where data packets are generated at a remote server in the Internet and destined to a group of nomadic users such as vehicle passengers and pedestrians. The quality of a data dissemination service is highly dependent on the availability of network infrastructures in terms of the access points. In this article, we investigate the utilization of roadside wireless local area networks (RS-WLANs) as a network infrastructure for data dissemination. A two-level cooperative data dissemination approach is presented. With the network-level cooperation, the resources in the RS-WLANs are used to facilitate the data dissemination services for the nomadic users. The packet-level cooperation is exploited to improve the packet transmission rate to a nomadic user. Various techniques for the two levels of cooperation are discussed. A case study is presented to evaluate the performance of the data dissemination approach.
Keywords:- IEEE Project Titles 2012, Wireless Communication Titles, Cloud Computing Titles, Networking Titles.





Topology Control in Mobile Ad Hoc Networks with Cooperative Communications.


Cooperative communication has received tremendous interest in wireless networking. Most existing works on cooperative communications are focused on link-level physical layer issues. Consequently, the impacts of cooperative communications on network-level upper layer issues, such as topology control, routing, and network capacity, are largely ignored. In this article, we propose a Capacity-Optimized Cooperative (COCO) topology control scheme to improve the network capacity in MANETs by jointly considering both upper layer network capacity and physical layer cooperative communications. Through simulations, we show that physical layer cooperative communications have significant impacts on the network capacity, and the proposed topology control scheme can substantially improve the network capacity in MANETs with cooperative communications.
Keywords: IEEE Project Titles 2012, Wireless Communication Titles, Data Mining Titles, Cloud Computing Titles.

Bridging Social and Data Networks


Social networking applications have emerged as the platform of choice for carrying out a number of different activities online. In addition to their primary target of social interaction, we now also employ such applications to search for information online or to share multimedia content with our friends and families. For instance, according to recent statistics, each of us spends on average 15 min on YouTube every day.
Keywords: IEEE Project Titles 2012, Data Mining Titles, Wireless Communication Titles, Networking Titles, Cloud Communication Titles.

Discovering Characterizations of the Behavior of Anomalous Sub-populations.



We consider the problem of discovering attributes, or properties, accounting for the a priori stated abnormality of a group of anomalous individuals (the outliers) with respect to an overall given population (the inliers). To this aim, we introduce the notion of exceptional property and define the concept of exceptionality score, which measures the significance of a property. In particular, in order to single out exceptional properties, we resort to a form of minimum distance estimation for evaluating the badness of fit of the values assumed by the outliers compared to the probability distribution associated with the values assumed by the inliers. Suitable exceptionality scores are introduced for both numeric and categorical attributes. These scores are, from both the analytical and the empirical point of view, designed to be effective for small samples, as is the case for outliers. We present an algorithm, called EXPREX, for efficiently discovering exceptional properties. The algorithm is able to reduce the needed computational effort by exploring only relevant numerical intervals and by exploiting suitable pruning rules. The experimental results confirm that our technique is able to characterize outliers in a natural manner.
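As a toy stand-in for the paper's minimum-distance estimation, an exceptionality score for a numeric attribute can compare the outliers' empirical distribution to the inliers'. The Kolmogorov-Smirnov distance is used here purely for illustration; EXPREX's actual scores differ:

```python
def exceptionality_score(outlier_values, inlier_values):
    """Toy exceptionality score for a numeric attribute: the
    Kolmogorov-Smirnov distance between the outliers' empirical CDF
    and the inliers' empirical CDF. The worse the outliers fit the
    inlier distribution, the higher the score (up to 1.0)."""
    points = sorted(set(outlier_values) | set(inlier_values))

    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(outlier_values, x) - ecdf(inlier_values, x))
               for x in points)
```

Outliers whose values lie entirely outside the inlier range score 1.0 on that attribute, while outliers drawn from within the inlier range score much lower, so the attribute with the highest score is the one that best "explains" the abnormality.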
Keywords:- IEEE Project Titles 2012, Data Mining Titles, Cloud Computing Titles, Networking Titles.

Outsourced Similarity Search on Metric Data Assets




This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the data objects most similar to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries.

Keywords:- IEEE Project 2012, Data Mining Titles, Cloud Computing Titles, Networking Titles.

Scalable Learning of Collective Behavior


The study of collective behavior aims to understand how individuals behave in a social networking environment. Oceans of data generated by social media like Facebook, Twitter, Flickr, and YouTube present opportunities and challenges to study collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension-based approach has been shown effective in addressing the heterogeneity of connections presented in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands of actors. The scale of these networks entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the proposed approach can efficiently handle networks of millions of actors while demonstrating prediction performance comparable to other, nonscalable methods.
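The way sparse social dimensions fall out of an edge-centric clustering can be sketched as follows. The edge clustering itself (e.g., k-means over edges) is assumed to be already computed; the names are illustrative:

```python
from collections import defaultdict

def sparse_social_dimensions(edge_clusters):
    """Build sparse social dimensions from an edge-centric clustering.
    Each node is affiliated with exactly the clusters of its incident
    edges, so a node's dimension set stays small even in networks with
    millions of actors; these sets become the sparse feature rows fed
    to a discriminative classifier for behavior prediction.

    edge_clusters: dict mapping an (u, v) edge to its cluster id."""
    dims = defaultdict(set)
    for (u, v), cluster in edge_clusters.items():
        dims[u].add(cluster)
        dims[v].add(cluster)
    return dict(dims)
```

Clustering edges rather than nodes is what keeps the representation sparse: a node bridging two communities simply picks up both cluster ids from its incident edges.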

Keywords:- IEEE Project 2012, Data Mining Titles, Wireless Communication Titles, Networking Titles.

QUERY PLANNING FOR CONTINUOUS AGGREGATION QUERIES OVER A NETWORK OF DATA AGGREGATORS


ABSTRACT:-

Continuous queries are used to monitor changes to time-varying data and to provide results useful for online decision making. Typically, a user desires to obtain the value of some aggregation function over distributed data items, for example, to know the value of a client's portfolio or the average of temperatures sensed by a set of sensors. In these queries, a client specifies a coherency requirement as part of the query. We present a low-cost, scalable technique to answer continuous aggregation queries using a network of aggregators of dynamic data items. In such a network of data aggregators, each data aggregator serves a set of data items at specific coherencies. Just as various fragments of a dynamic webpage are served by one or more nodes of a content distribution network, our technique involves decomposing a client query into subqueries and executing the subqueries on judiciously chosen data aggregators with their individual subquery incoherency bounds. We provide a technique for obtaining the optimal set of subqueries with their incoherency bounds that satisfies the client query's coherency requirement with the least number of refresh messages sent from the aggregators to the client. To estimate the number of refresh messages, we build a query cost model which can be used to estimate the number of messages required to satisfy the client-specified incoherency bound. Performance results using real-world traces show that our cost-based query planning leads to queries being executed using less than one-third the number of messages required by existing schemes.
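A tiny sketch of the planning step for an additive aggregate such as SUM: the client's incoherency bound is divided among the chosen subqueries so that the per-subquery bounds sum to the client's bound. A proportional split is shown here purely for illustration; the paper derives the optimal division from its query cost model:

```python
def allocate_bounds(subquery_weights, client_bound):
    # For an additive aggregate (e.g., SUM), the client's
    # incoherency bound can be split across subqueries; as long
    # as the per-subquery bounds sum to the client's bound, the
    # combined answer stays within the required coherency.
    # Here: a simple proportional split by subquery weight.
    total = sum(subquery_weights)
    return [client_bound * w / total for w in subquery_weights]
```

Each data aggregator then only needs to push a refresh when its subquery's value drifts past its own (looser) bound, which is what cuts the message count.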

Keywords: IEEE Project 2012, Data Mining Titles, Networking Titles, Cloud Computing, Wireless Communication.

A Generalized Logarithmic Image Processing Model Based on the Gigavision Sensor Model



ABSTRACT:-

The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model as a special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing our results with those of two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
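For reference, the classical LIP operations that the GLIP model generalizes can be written down directly. These are the standard textbook formulas on the gray-tone range `[0, M)`; they are the special case, not the paper's generalized model:

```python
M = 256.0  # upper bound of the gray-tone range

def lip_add(a, b):
    # LIP addition: closed on [0, M), unlike ordinary addition,
    # which can overflow the gray-scale range.
    return a + b - (a * b) / M

def lip_scalar_mul(lam, a):
    # LIP scalar multiplication: the kind of operation the paper
    # applies to tone mapping.
    return M - M * (1.0 - a / M) ** lam
```

Note that `lip_add(a, b) < M` whenever `a, b < M`, which is exactly the closure property that makes LIP arithmetic attractive for bounded-range images.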

Keywords: IEEE Project 2012, Image Processing Title, Data Mining Title, Cloud computing, Networking.

A Complete Processing Chain for Shadow Detection and Reconstruction in VHR Images


ABSTRACT
The presence of shadows in very high resolution (VHR) images can represent a serious obstacle to their full exploitation. This paper addresses this problem as a whole through a complete processing chain, which relies on various advanced image processing and pattern recognition tools. The first key point of the chain is that shadow areas are not only detected but also classified, to allow their customized compensation. The detection and classification tasks are implemented by means of the state-of-the-art support vector machine approach. A quality-check mechanism is integrated in order to reduce subsequent misreconstruction problems. The reconstruction is based on a linear regression method that compensates shadow regions by adjusting the intensities of the shaded pixels according to the statistical characteristics of the corresponding nonshadow regions. Moreover, borders are explicitly handled by making use of adaptive morphological filters and linear interpolation to prevent possible border artifacts in the reconstructed image. Experimental results obtained on three VHR images representing different shadow conditions are reported, discussed, and compared with two other reconstruction techniques.
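The linear compensation step amounts to matching the first- and second-order moments of each shadow region to those of its corresponding nonshadow region. A minimal sketch, illustrative only, operating on a flat list of intensities:

```python
import statistics

def compensate_shadow(shadow_pixels, nonshadow_mean, nonshadow_std):
    # Linearly remap shaded intensities so their mean and standard
    # deviation match the statistics of the corresponding
    # nonshadow region of the same class.
    mu = statistics.mean(shadow_pixels)
    sigma = statistics.pstdev(shadow_pixels)
    return [nonshadow_mean + (p - mu) * nonshadow_std / sigma
            for p in shadow_pixels]
```

Classifying shadows first matters because the target statistics (`nonshadow_mean`, `nonshadow_std`) differ per surface type; a single global mapping would over- or under-brighten some regions.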

Keywords: IEEE Project 2012, Networking Titles, Data Mining Titles, Cloud Computing Title, Image Processing, Wireless Networking.

TAM: A Tiered Authentication of Multicast Protocol for Ad-Hoc Networks


ABSTRACT

Multicast streams are the dominant application traffic pattern in many mission-critical ad-hoc networks. The limited computation and communication resources, the large-scale deployment, and the unguaranteed connectivity to trusted authorities make known security solutions for wired and single-hop wireless networks inappropriate for such an application environment. This paper promotes a novel Tiered Authentication scheme for Multicast traffic (TAM) for large-scale dense ad-hoc networks. Nodes are grouped into clusters. Multicast traffic within the same cluster employs one-way hash chains in order to authenticate the message source. Cross-cluster multicast traffic includes message authentication codes (MACs) that are based on a set of keys. Each cluster uses a unique subset of keys to look for its distinct combination of valid MACs in the message in order to authenticate the source. TAM thus combines the advantages of the secret-information-asymmetry and time-asymmetry paradigms and exploits network clustering to reduce overhead and ensure scalability. The numerical and analytical results demonstrate the performance advantage of TAM.
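The intra-cluster mechanism builds on standard one-way chains: keys are generated by repeated hashing and disclosed in reverse order, so a receiver can authenticate each newly released key against the one it already holds. A minimal sketch (SHA-256 is chosen here for illustration; the abstract does not name a hash function):

```python
import hashlib

def build_chain(seed, length):
    # chain[i+1] = H(chain[i]); the sender discloses keys from
    # the end of the chain backwards over time.
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(released_key, previously_released):
    # A newly released key is valid iff hashing it reproduces the
    # key released before it; one-way-ness prevents a forger from
    # computing future keys from past ones.
    return hashlib.sha256(released_key).digest() == previously_released
```

This is the time-asymmetry half of TAM; the cross-cluster MAC-subset scheme supplies the secret-information asymmetry.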

Keywords: IEEE Project 2012, Networks Title, Image Processing Title, Data Mining Title, Cloud Computing Title. 

LOCAL BROADCAST ALGORITHMS IN WIRELESS AD HOC NETWORKS: REDUCING THE NUMBER OF TRANSMISSIONS



ABSTRACT
There are two main approaches, static and dynamic, to broadcast algorithms in wireless ad hoc networks. In the static approach, local algorithms determine the status (forwarding/nonforwarding) of each node proactively based on local topology information and a globally known priority function. In this paper, we first show that local broadcast algorithms based on the static approach cannot achieve a good approximation factor to the optimum solution (an NP-hard problem). However, we show that a constant approximation factor is achievable if (relative) position information is available. In the dynamic approach, local algorithms determine the status of each node "on-the-fly" based on local topology information and broadcast state information. Using the dynamic approach, it was recently shown that local broadcast algorithms can achieve a constant approximation factor to the optimum solution when (approximate) position information is available. However, using position information can simplify the problem, and in some applications it may not be practical to have position information. Therefore, we wish to know whether local broadcast algorithms based on the dynamic approach can achieve a constant approximation factor without using position information. We answer this question in the positive: we design a local broadcast algorithm in which the status of each node is decided "on-the-fly" and prove that the algorithm achieves both full delivery and a constant approximation to the optimum solution.
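The dynamic decision rule can be caricatured in a few lines: a node rebroadcasts only if, at its scheduled transmission time, some neighbor is still uncovered by the broadcasts it has overheard. This is a deliberately simplified sketch; the paper's actual algorithm additionally guarantees full delivery and a constant approximation factor:

```python
def should_forward(my_neighbors, covered_nodes):
    # Dynamic ("on-the-fly") rule: rebroadcast only if at least
    # one neighbor has not yet been covered by previously heard
    # broadcasts. A static algorithm would have fixed this node's
    # status in advance, regardless of what it overhears.
    return any(n not in covered_nodes for n in my_neighbors)
```

The saving comes from nodes whose entire neighborhood is already covered staying silent, which is exactly the transmissions the title promises to reduce.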

Keywords: IEEE Project 2012, Wireless Networking, Cloud Computing, Image Processing, Data Mining. 


IMPROVING QOS IN HIGH-SPEED MOBILITY USING BANDWIDTH MAPS


ABSTRACT
It is widely evidenced that location has a significant influence on the actual bandwidth that can be expected from Wireless Wide Area Networks (WWANs), e.g., 3G. Because a fast-moving vehicle continuously changes its location, vehicular mobile computing is confronted with the possibility of significant variations in available network bandwidth. While it is difficult for providers to eliminate bandwidth disparity over a large service area, it may be possible to map network bandwidth to the road network through repeated measurements. In this paper, we report results of an extensive measurement campaign to demonstrate the viability of such bandwidth maps. We show how bandwidth maps can be interfaced with adaptive multimedia servers and the emerging vehicular communication systems that use on-board mobile routers to deliver Internet services to the passengers. Using simulation experiments driven by our measurement data, we quantify the improvement in Quality of Service (QoS) that can be achieved by taking advantage of the geographical knowledge of bandwidth provided by the bandwidth maps. We find that our approach reduces the frequency of disruptions in perceived QoS for both audio and video applications in high-speed vehicular mobility by several orders of magnitude.
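As an illustration of how a bandwidth map can drive an adaptive multimedia server, the planner below picks the highest bitrate sustainable over the worst road segment on the planned route (a hypothetical helper, not the paper's interface):

```python
def plan_bitrate(bandwidth_map, route, bitrates):
    # bandwidth_map: {road_segment: measured bandwidth, kbit/s}.
    # Choose the highest encoding bitrate that fits under the
    # worst segment on the planned route, so playback survives
    # the coverage holes the map predicts.
    worst = min(bandwidth_map[s] for s in route)
    feasible = [b for b in bitrates if b <= worst]
    return max(feasible) if feasible else min(bitrates)
```

With geographic knowledge of bandwidth, the server downgrades before the vehicle enters a poor-coverage stretch instead of reacting after playback has already stalled.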

Keywords: IEEE 2012, Data Mining Title, Image Processing Title, Cloud Computing Title, Networking Title.

Thursday, 5 July 2012

FESCIM: FAIR, EFFICIENT, AND SECURE COOPERATION INCENTIVE MECHANISM FOR MULTIHOP CELLULAR NETWORKS




ABSTRACT
In multihop cellular networks, the mobile nodes usually relay others' packets for enhancing the network performance and deployment. However, selfish nodes usually do not cooperate but make use of the cooperative nodes to relay their packets, which has a negative effect on the network fairness and performance. In this paper, we propose a fair and efficient incentive mechanism to stimulate the node cooperation. Our mechanism applies a fair charging policy by charging the source and destination nodes when both of them benefit from the communication. To implement this charging policy efficiently, hashing operations are used in the ACK packets to reduce the number of public-key-cryptography operations. Moreover, reducing the overhead of the payment checks is essential for the efficient implementation of the incentive mechanism due to the large number of payment transactions. Instead of generating a check per message, a small-size check can be generated per route, and a check submission scheme is proposed to reduce the number of submitted checks and protect against collusion attacks. Extensive analysis and simulations demonstrate that our mechanism can secure the payment and significantly reduce the checks' overhead, and the fair charging policy can be implemented almost computationally free by using hashing operations.

Keywords: IEEE Project 2012, Data Mining Title, Networking Title, Cloud Computing Title, Image Processing.


DSDMAC: DUAL SENSING DIRECTIONAL MAC PROTOCOL FOR AD HOC NETWORKS WITH DIRECTIONAL ANTENNAS


ABSTRACT

Applying directional antennas in wireless ad hoc networks can theoretically achieve higher spatial multiplexing gain and, thus, higher network throughput. However, in practice, deafness, hidden-terminal, and exposed-terminal problems are exaggerated with directional antennas, and they cause the degradation of the overall network performance. Although several random-access-based medium-access control (MAC) protocols have been proposed in the literature for networks with directional antennas, the deafness, hidden-terminal, and exposed-terminal problems have yet to be fully solved. In this paper, we present a new MAC protocol called the dual-sensing directional MAC (DSDMAC) protocol for wireless ad hoc networks with directional antennas. Different from existing protocols, the DSDMAC protocol relies on the dual-sensing strategy to identify deafness, resolve the hidden-terminal problem, and avoid unnecessary blocking. The integrity of the DSDMAC protocol is verified and validated using a formal protocol verification and validation tool. We further develop an analytical framework to quantify the performance of the DSDMAC protocol and conduct extensive simulations, which verify the accuracy of the analysis. The protocol verification, analysis, and simulation results show the robustness and superior performance of the DSDMAC protocol, which can achieve a much higher network throughput and lower delay by utilizing the spatial multiplexing gain of the directional antennas. The results presented in this paper show that the proposed DSDMAC protocol can substantially outperform the state-of-the-art protocols.

Keywords: IEEE Project 2012, Data Mining Title, Networking Title, Mobile Computing Title.

Wednesday, 4 July 2012

CORMAN: A NOVEL COOPERATIVE OPPORTUNISTIC ROUTING SCHEME IN MOBILE AD HOC NETWORKS

ABSTRACT
The link quality variation of wireless channels had been a challenging issue in data communications until recent work began to explicitly exploit this characteristic. The same broadcast transmission may be perceived significantly differently, and usually independently, by receivers at different geographic locations. Furthermore, even the same stationary receiver may experience drastic link quality fluctuation over time. The combination of link-quality variation with the broadcasting nature of wireless channels has revealed a new direction in wireless networking research, namely, cooperative communication. Research on cooperative communication first attracted interest in the community at the physical layer, but more recently its importance and usability have also been recognized at upper layers of the network protocol stack. In this article, we tackle the problem of opportunistic data transfer in mobile ad hoc networks. Our solution, called Cooperative Opportunistic Routing in Mobile Ad hoc Networks (CORMAN), is a pure network-layer scheme that can be built atop off-the-shelf wireless networking equipment. Nodes in the network use a lightweight proactive source routing protocol to determine a list of intermediate nodes that the data packets should follow en route to the destination. When a data packet broadcast by an upstream node happens to be received by a downstream node further along the route, it continues its way from there and thus arrives at the destination sooner. This is achieved through cooperative data communication at the link and network layers. This work is a powerful extension of the pioneering work on ExOR. We test CORMAN, compare it to AODV, and observe significant performance improvement in varying mobile settings.
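The forwarding shortcut can be sketched as follows: among the listed intermediate nodes that actually overheard a broadcast, the one furthest downstream takes over, skipping the hops in between (an illustrative reconstruction, not CORMAN's actual packet-handling code):

```python
def next_forwarder(route, sender, receivers):
    # route: ordered list of node ids from source to destination,
    # as computed by the proactive source routing protocol.
    # Among the nodes that actually received the broadcast, the
    # one furthest downstream along the route continues forwarding,
    # so lucky long-distance receptions skip intermediate hops.
    sender_pos = route.index(sender)
    downstream = [route.index(r) for r in receivers
                  if r in route and route.index(r) > sender_pos]
    return route[max(downstream)] if downstream else None
```

If no downstream node overheard the packet, the sketch returns `None` and the sender would fall back to retransmitting along the planned route.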

Keywords: Networking, Image Processing, Cloud Computing, Data Mining, Mobile Computing.

CONNECTIVITY OF MULTIPLE COOPERATIVE COGNITIVE RADIO AD HOC NETWORKS

ABSTRACT

In cognitive radio networks, the signal reception quality of a secondary user degrades due to the interference from multiple heterogeneous primary networks, and also the transmission activity of a secondary user is constrained by its interference to the primary networks. It is difficult to ensure the connectivity of the secondary network. However, since there may exist multiple heterogeneous secondary networks with different radio access technologies, such secondary networks may be treated as one secondary network via proper cooperation, to improve connectivity. In this paper, we investigate the connectivity of such a cooperative secondary network from a percolation-based perspective, in which each secondary network's user may have other secondary networks' users acting as relays. The connectivity of this cooperative secondary network is characterized in terms of percolation threshold, from which the benefit of cooperation is justified. For example, while a noncooperative secondary network does not percolate, percolation may occur in the cooperative secondary network; or when a noncooperative secondary network percolates, less power would be required to sustain the same level of connectivity in the cooperative secondary network.

Keywords: Networking, Image Processing, Cloud Computing, Data Mining, Mobile Computing.

CAPACITY SCALING OF WIRELESS AD HOC NETWORKS: SHANNON MEETS MAXWELL

ABSTRACT

In this paper, we characterize the information-theoretic capacity scaling of wireless ad hoc networks with randomly distributed nodes. By using an exact channel model derived from Maxwell's equations, we resolve the conflict in the literature between the linear capacity scaling of Özgür et al. and the degrees-of-freedom limit, given as the ratio of the network diameter to the wavelength, of Franceschetti et al. In dense networks, where the network area is fixed, the capacity scaling is given as the minimum of the number of nodes and the degrees-of-freedom limit, to within an arbitrarily small exponent. In extended networks, where the network area grows linearly with the number of nodes, the capacity scaling is likewise given as the minimum of the number of nodes and the degrees-of-freedom limit, to within an arbitrarily small exponent. Hence, we recover the linear capacity scaling of Özgür et al. when the number of nodes is the limiting term, in both dense and extended networks; otherwise, the capacity scaling is given as the degrees-of-freedom limit characterized by Franceschetti et al. For achievability, a modified hierarchical cooperation scheme is proposed based on a lower bound on the capacity of the multiple-input multiple-output channel between two node clusters under our channel model.

Keywords: Networking, Image Processing, Cloud Computing, Data Mining, Mobile Computing.

A TRIGGER IDENTIFICATION SERVICE FOR DEFENDING REACTIVE JAMMERS IN WSN

ABSTRACT

During the last decade, the reactive jamming attack has emerged as a great security threat to wireless sensor networks, due to the mass disruption it causes to legitimate sensor communications and the difficulty of detecting and defending against it. Considering the specific characteristics of reactive jammer nodes, a new scheme to deactivate them by efficiently identifying all trigger nodes, whose transmissions invoke the jammer nodes, has been proposed and developed. Such a trigger-identification procedure can work as an application-layer service and benefit many existing reactive-jamming defense schemes. In this paper, on the one hand, we leverage several optimization problems to provide a complete trigger-identification service framework for unreliable wireless sensor networks. On the other hand, we provide an improved algorithm with regard to two sophisticated jamming models, in order to enhance its robustness for various network scenarios. Theoretical analysis and simulation results are included to validate the performance of this framework.
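Trigger identification is naturally phrased as a group-testing problem: letting a subset of nodes transmit and watching whether the jammer reacts reveals whether the subset contains a trigger. A binary-splitting sketch of that idea (illustrative only; the paper instead formulates optimization problems tailored to unreliable networks and sophisticated jamming models):

```python
def find_triggers(nodes, jams):
    # jams(subset) -> True iff transmitting from that subset
    # provokes the reactive jammer, i.e., the subset contains at
    # least one trigger node. Recursively halve the suspect set,
    # keeping only halves that still provoke jamming.
    if not jams(nodes):
        return []
    if len(nodes) == 1:
        return list(nodes)
    mid = len(nodes) // 2
    return find_triggers(nodes[:mid], jams) + find_triggers(nodes[mid:], jams)
```

With few triggers among many sensors, this needs far fewer jamming observations than testing nodes one by one, which is why a group-testing formulation keeps the identification service lightweight.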

Keywords: Networking, Image Processing, Cloud Computing, Data Mining, Mobile Computing.


A STATISTICAL MECHANICS-BASED FRAMEWORK TO ANALYZE AD HOC NETWORKS WITH RANDOM ACCESS



ABSTRACT

Characterizing the performance of ad hoc networks is one of the most intricate open challenges; conventional ideas based on information-theoretic techniques and inequalities have not yet been able to successfully tackle this problem in its generality. Motivated thus, we promote the totally asymmetric simple exclusion process (TASEP), a particle flow model in statistical mechanics, as a useful analytical tool to study ad hoc networks with random access. Employing the TASEP framework, we first investigate the average end-to-end delay and throughput performance of a linear multihop flow of packets. Additionally, we analytically derive the distribution of delays incurred by packets at each node, as well as the joint distributions of the delays across adjacent hops along the flow. We then consider more complex wireless network models comprising intersecting flows, and propose the partial mean-field approximation (PMFA), a method that helps tightly approximate the throughput performance of the system. We finally demonstrate via a simple example that the PMFA procedure is quite general in that it may be used to accurately evaluate the performance of ad hoc networks with arbitrary topologies.
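A TASEP instance is easy to simulate, which is one reason it is a convenient model for a linear multihop packet flow: sites are hops, particles are packets, and the exclusion rule models one-packet-at-a-time transmission. A minimal sweep-based sketch (parameter names are illustrative, not the paper's notation):

```python
import random

def tasep_step(sites, alpha, beta, rng):
    # One sweep of the totally asymmetric simple exclusion process
    # on an open 1-D lattice: a particle (packet) hops right only
    # if the next site (hop) is empty; alpha and beta are the
    # injection and extraction rates at the two boundaries.
    n = len(sites)
    for i in rng.sample(range(n + 1), n + 1):  # random update order
        if i == 0:
            if sites[0] == 0 and rng.random() < alpha:
                sites[0] = 1                      # inject at the source
        elif i == n:
            if sites[-1] == 1 and rng.random() < beta:
                sites[-1] = 0                     # extract at the sink
        elif sites[i - 1] == 1 and sites[i] == 0:
            sites[i - 1], sites[i] = 0, 1         # hop one site right
    return sites
```

Averaging occupancy and exit events over many sweeps yields the delay and throughput quantities that the paper derives analytically for linear flows and, via the PMFA, for intersecting flows.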

Keywords: Networking, Image Processing, Cloud Computing, Data Mining, Mobile Computing.