Kolkata, India

TUTORIALS

Tutorial Abstracts

Tutorial 1: Side Channels in Cryptography
Dr. Debdeep Mukhopadhyay
IIT Kharagpur, India
Abstract:

Security is ubiquitous. With the advent of e-commerce and electronic transactions, the need to develop secure systems has grown tremendously. However, history has taught us that designing strong cryptographic algorithms is just the beginning. Although the internal algorithms may be robust against conventional cryptanalysis, implementations of the ciphers can leak valuable secret information through covert channels, commonly referred to as side channels. These side channels can take the form of time required, power consumed, behavior under faults, and several others.

The tutorial starts with an understanding of how side channels can be exploited to recover the secret key of a cipher. We explain various forms of side-channel attacks, namely simple and differential power analysis, fault analysis, and timing attacks exploiting cache architectures, with respect to implementations of state-of-the-art encryption algorithms. We then cover possible countermeasures against the above forms of attacks, and conclude with recent formal models for leakage-resilient cryptography.
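
To make the timing channel above concrete, the following sketch shows how an early-exit, byte-by-byte secret comparison leaks information through its running time. The secret value and the artificial per-byte delay (standing in for real per-byte computation) are illustrative assumptions, not taken from the tutorial:

    import time

    SECRET = b"k3y!"                      # hypothetical secret value

    def naive_check(guess):
        # Compares byte by byte and returns early on the first mismatch,
        # so the running time grows with the length of the matching prefix.
        for a, b in zip(SECRET, guess):
            if a != b:
                return False
            time.sleep(0.001)             # stand-in for per-byte work
        return len(guess) == len(SECRET)

    def time_check(guess):
        t0 = time.perf_counter()
        naive_check(guess)
        return time.perf_counter() - t0

    # Recover the first secret byte: the guess whose first byte is correct
    # runs one extra (slow) iteration, so it takes measurably longer.
    timings = {c: time_check(bytes([c]) + b"\x00" * 3) for c in range(256)}
    print(chr(max(timings, key=timings.get)))   # prints 'k'

Repeating the measurement with each recovered prefix fixed extends the attack to the remaining bytes; a constant-time comparison is the standard countermeasure.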

Biography:

Debdeep Mukhopadhyay is an Assistant Professor in the Department of Computer Science and Engineering, IIT Kharagpur, India. He completed his PhD in 2007 and his Masters in 2004, both in the Department of Computer Science and Engineering, IIT Kharagpur, and obtained his BTech degree from the Department of Electrical Engineering, IIT Kharagpur, in 2001. His present research interests are in the fields of Cryptography, Cryptanalysis and VLSI.

Tutorial 2: Web Applications Security Testing Methodologies
Nibin Varghese, Abhisek Datta, Abhinav Shrivastava
iViZ Techno Solutions Pvt. Ltd., Kolkata, India
Abstract:

The web has become an integral part of every internet user's life. Along with the development of rich web applications and complex web technologies, security has become an important concern for web applications and the technologies used to build them.

Alongside developments in network security testing and scanning mechanisms, considerable progress has also been made in the area of web application security testing. Comprehensive testing guides are available from the OWASP project, covering almost all publicly known classes of web vulnerabilities. One of the primary reasons for insecurity in web applications is a lack of awareness of web application security issues among developers.

This presentation will mainly focus on the various aspects of web application security and a moderately comprehensive methodology demonstrating the tools and techniques used in a complete web application security audit. Problems associated with automating web application security testing will also be discussed.
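
As a concrete example of one step in such an audit, the sketch below probes a single query parameter for error-based SQL injection using the widely used Python requests library. The target URL, parameter name and error signatures are hypothetical, and such probing should only ever be run against systems one is authorized to test:

    import requests

    TARGET = "http://testsite.example/item"   # hypothetical endpoint
    PAYLOADS = ["'", "\"", "' OR '1'='1"]     # classic probe strings
    ERROR_SIGNS = ["SQL syntax", "ODBC", "ORA-", "SQLSTATE"]

    def probe(param):
        for payload in PAYLOADS:
            resp = requests.get(TARGET, params={param: payload}, timeout=5)
            # A database error echoed back suggests the parameter is being
            # concatenated into a query without sanitization.
            if any(sign in resp.text for sign in ERROR_SIGNS):
                print(f"possible SQL injection: {param}={payload!r}")
                return True
        return False

    probe("id")

A full audit, as described in the OWASP testing guide, layers many such checks (injection, authentication, session management, and so on) over a crawl of the application.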

Biography: Nibin Varghese:

Nibin Varghese is a Security Research Engineer with iViZ Techno Solutions Pvt Ltd, Kolkata, and is responsible for the automation of tools to identify web application and network vulnerabilities. Nibin was a speaker at ClubHack 2008, where he presented the paper "Reverse Engineering for Exploit Writers". Previously, Nibin worked at Ernst & Young SSC, India, where he was responsible for providing Technology and Security Risk Services to clients in the Middle East region. Nibin has more than 3 years of experience in providing IT and information security related services to clients in various industry segments.

Biography: Abhisek Datta:

Abhisek Datta is the Team Lead of Security Research at iViZ Techno Solutions. He has expertise in Vulnerability Research, Vulnerability Analysis, Exploit Development and Security Software Development. His core area of expertise lies in vulnerability analysis and exploit development for the Win32 and Linux platforms. He has audited various software products and successfully discovered new exploitable vulnerabilities in many leading products.

Previously, he was involved in the core design and development of iViZ's flagship on-demand Penetration Testing product and service. He contributes to various strategic and complex Penetration Testing assignments, particularly audits of target systems, software and applications using unconventional methodologies.

Tutorial 3: Role Engineering and Role Mining
Dr. Jaideep Vaidya
Rutgers University, U.S.A.

Abstract: Today, Role Based Access Control (RBAC) is the de facto model used for advanced access control and is widely deployed in diverse enterprises of all sizes. As a result, RBAC has become the norm for enforcing security in many of today's organizations. Basically, a role is nothing but a set of permissions. Roles represent organizational agents that perform certain job functions within the organization, and users, in turn, are assigned appropriate roles based on their qualifications. However, one of the major challenges in implementing RBAC is to define a complete and correct set of roles. This process, known as role engineering, has been identified as one of the costliest components in realizing RBAC. Essentially, role engineering is the process of defining roles and assigning permissions to them. Given the predominance of RBAC, effective role engineering is a must to realize its full benefits.
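
For readers new to these relations, a minimal sketch of the RBAC model described above follows; the role and permission names are hypothetical. A user holds a permission exactly when some role assigned to that user grants it:

    role_permissions = {                 # role -> permissions (PA)
        "teller":  {"view_account", "post_transaction"},
        "auditor": {"view_account", "view_log"},
    }
    user_roles = {                       # user -> roles (UA)
        "asha": {"teller"},
        "ravi": {"teller", "auditor"},
    }

    def check_access(user, permission):
        """A user holds a permission iff one of their roles grants it."""
        return any(permission in role_permissions[r]
                   for r in user_roles.get(user, ()))

    print(check_access("asha", "view_log"))   # False
    print(check_access("ravi", "view_log"))   # True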

There are two basic approaches towards role engineering: top-down and bottom-up. Under the top-down approach, roles are defined by carefully analyzing and decomposing business processes into smaller units in a functionally independent manner. These functional units are then associated with permissions on information systems. In other words, this approach begins with defining a particular job function and then creating a role for this job function by associating the needed permissions. Often, this is a cooperative process in which authorities from different disciplines come to understand the semantics of one another's business processes and then incorporate them in the form of roles. Since there are often dozens of business processes, tens of thousands of users and millions of authorizations, this is a rather difficult task. Therefore, relying solely on a top-down approach is in most cases not viable, although some case studies indicate that it has been done successfully by some organizations (though at a high cost).

In contrast, since organizations do not exist in a vacuum, the bottom-up approach utilizes the existing permission assignments to formulate roles. Starting from the permissions that exist before RBAC is implemented, the bottom-up approach aggregates these into roles. It may also be advantageous to use a mixture of the top-down and bottom-up approaches to conduct role engineering. While the top-down model is likely to ignore the existing permissions, a bottom-up model may not consider the business functions of an organization. However, the bottom-up approach has the advantage that much of the role engineering process can be automated. Role mining can be used as a tool, in conjunction with a top-down approach, to identify potential or candidate roles, which can then be examined to determine whether they are appropriate given existing functions and business processes. Such a role mining tool can also be used to reexamine and optimize the existing permissions in RBAC.

In the past several years there has been renewed interest in this area. There have been several attempts to propose good bottom-up techniques to find roles. More recently, researchers have begun formally defining the role mining problem (RMP) and have proposed a number of RMP variants. The basic-RMP has been shown to be equivalent to a number of known problems, including matrix decomposition and the minimum biclique cover problem in graphs, among others. The goal of the basic-RMP is to identify a minimum set of roles that can correctly reconstitute the given state of the organization. While minimizing the number of roles is a clear-cut way of reducing the burden on the security administrator, other role mining problems have been defined with different minimization objectives. One objective is to discover roles in such a way that the total number of user-to-role assignments and role-to-permission assignments (|UA|+|PA|) is minimal. This variant, known as the Edge-RMP, has been studied by Vaidya et al., Ene et al. and Zhang et al. Other variants focus on discovering optimal roles as well as role hierarchies. Inexact variants of the RMP have also been proposed. Along with data mining and graph-theoretic solutions, the problem has also been modeled probabilistically and solutions proposed. In this tutorial, we will present all of the relevant concepts and techniques that have been developed in this area, and present existing challenges and directions for future work.
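
As a rough illustration of bottom-up role mining, the sketch below applies a simple greedy heuristic to the basic-RMP: candidate roles are drawn from the users' permission sets and their pairwise intersections, and roles are picked greedily until every (user, permission) cell is covered. This is a toy heuristic on made-up data, not any of the published algorithms cited above:

    from itertools import combinations

    upa = {                              # toy user-permission assignment
        "alice": {"read", "write", "approve"},
        "bob":   {"read", "write"},
        "carol": {"read", "approve"},
        "dave":  {"read"},
    }

    # Candidate roles: each user's permission set plus all pairwise
    # intersections of those sets.
    candidates = {frozenset(p) for p in upa.values()}
    candidates |= {a & b for a, b in combinations(list(candidates), 2)}
    candidates.discard(frozenset())

    # Greedy cover: repeatedly pick the candidate role covering the most
    # still-uncovered (user, permission) cells; a role is assigned only
    # to users whose existing permissions it does not exceed.
    uncovered = {(u, p) for u, perms in upa.items() for p in perms}
    roles = []
    while uncovered:
        def gain(role):
            return sum(1 for u, perms in upa.items() if role <= perms
                       for p in role if (u, p) in uncovered)
        best = max(candidates, key=gain)
        roles.append(best)
        for u, perms in upa.items():
            if best <= perms:
                uncovered -= {(u, p) for p in best}

    for i, role in enumerate(roles):
        print(f"role_{i}: {sorted(role)}")

Minimizing |UA|+|PA| instead of the number of roles, as in the Edge-RMP, would require a different objective in the greedy step.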

Biography: Jaideep Vaidya is an Assistant Professor of Computer Information Systems at Rutgers University. Jaideep received a Bachelor's degree from the University of Mumbai, India, and a Masters and Ph.D. from Purdue University in 2004. His primary research interests are in the fields of Privacy, Security, Data Mining, and Databases. He has authored over 50 publications and received best paper awards at premier conferences such as ACM KDD and IEEE ICDE. He received the NSF CAREER Award in 2008 and the Junior Faculty Research Award from Rutgers Business School in 2009. His detailed CV can be found at: http://cimic.rutgers.edu/~jsvaidya/cv

Tutorial 4: Controlling Access to Digital Libraries
Dr. Aditya Bagchi
ISI Kolkata, India

Abstract:

A Digital Library (DL) provides facilities for collecting, cataloging, searching and disseminating information, so as to emulate and extend the services available in a conventional library. Not only must a digital library accomplish all the essential services of a traditional library, it should also exploit the advantages of digital computing, storage, searching and communication. Though efforts to move from simple bibliographic search to digital libraries started earlier, serious research and development in this area proliferated with the availability of the Internet and the World Wide Web (WWW). A set of workshops in 1994, primarily encouraged by the NSF's Digital Library Initiative, defined the basic needs and architecture of a DL. The basic components identified for a DL are:

  • Access to collections of multimedia information built upon the integration of text, image, graphics, audio, video and any other continuous media.
  • Representation of information content in an organized way so that users can identify and select both from among and within various information resources.
  • Navigation through and retrieval from both representational and primary information.
  • Presentation of both representational and primary information to users.

Since these components are interrelated, approaches and methods developed for any one component need to be tested in conjunction with the others so that the entire access mechanism becomes cost-effective. The network infrastructure for a DL makes all resources potentially available to anyone with network access. However, users have varying needs and different preferences for identifying, locating, selecting, retrieving, receiving and using information. If a DL blindly emulated a conventional library, it would provide the same user interface to all users; instead, depending on different cognitive styles and information-seeking habits, a DL should be able to provide different interfaces to different user groups.

The issue of appropriate representation of information resources for successful identification and selection of resources and the information they contain is fundamental to successful access to digital libraries. In the networked environment, large numbers of digital resources providing multimedia information are spread over the network. So, it is difficult for users to know what resources are available and how they differ in terms of scope and content. An approach to this problem is the design and development of metadata for accessing digital libraries. Metadata are representations of the structure, organization and content of information resources. Metadata must allow the users to search and identify appropriate resources for addressing their information needs, to select from among the possible appropriate resources, to select pertinent information from such resources and then to combine information from multiple resources (as needed) in a valid way, all prior to actual retrieval of information from the selected resources.

The turn of the millennium saw a significant change in the representation of digital libraries. To overcome inadequate support for high-level cognition and knowledge sharing, the DL model moved beyond simple information searching and browsing across multiple repositories to inquiry into knowledge about the contents of digital libraries. This approach extended traditional keyword-based indexing and searching to knowledge-based search by adding knowledge to the documents. As a result, the DL structure became two-layered, with a knowledge (or conceptual) layer and a document layer. Along with many proposed ontological structures for digital libraries, a formal Digital Library Ontology model, popularly known as the 5S model, has been proposed.

Controlled access to digital libraries has always been an important area of research. However, the approach to the problem has kept changing with changes in DL architecture. Initially, access control issues in a digital library were mainly related to Digital Rights Management. However, since a digital library is usually implemented in a networked environment, access control at the different participating sites is also important. The ontological model has given rise to many further access control issues, since one concept may refer to many documents and one document can in turn be referred to from more than one concept.
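
To see why this many-to-many mapping between concepts and documents complicates access control, consider the sketch below. The two-layer structure follows the discussion above, but the concepts, documents and the deny-overrides combination rule are illustrative assumptions rather than a model from the tutorial:

    concept_docs = {                 # concept layer -> document layer
        "genetics":   {"d1", "d2"},
        "statistics": {"d2", "d3"},
    }
    user_concept_perms = {           # per-user grants at the concept level
        "maya": {"genetics": "allow", "statistics": "deny"},
    }

    def can_read(user, doc):
        """Deny overrides: a 'deny' on any concept covering doc wins."""
        verdicts = [user_concept_perms.get(user, {}).get(c)
                    for c, docs in concept_docs.items() if doc in docs]
        if "deny" in verdicts:
            return False
        return "allow" in verdicts

    print(can_read("maya", "d1"))    # True  (only genetics covers d1)
    print(can_read("maya", "d2"))    # False (statistics denies d2)

A different combination rule (for example, allow if any covering concept allows) yields different results for the same grants, which is precisely the kind of policy question such models must settle.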

Starting from early models of digital libraries, this tutorial will describe the salient features of DL architectures along with their access control requirements. It will cover such requirements at the document level, the metadata level and the conceptual level. Some open problems will be discussed and possible research directions highlighted.

Biography: Aditya Bagchi received his Ph.D. in Engineering from Jadavpur University, India. After serving in various industries in India and the USA, he joined the Indian Statistical Institute, where he is now the Dean of Studies. Prof. Bagchi's research interests cover data modeling for large graphs (social and biological networks in particular), the development of data mining algorithms (association and dissociation rules in particular), and the design of access control models for different application areas. In his areas of research, Prof. Bagchi has published many papers in international journals and peer-reviewed conferences. Prof. Bagchi has also delivered invited lectures and tutorials at many universities, research labs, workshops and conferences in India, Europe and the USA. He also serves as an advisor to many government departments and projects in India, particularly on e-Governance and data security related issues.