Please note that there will be no distinction in the conference proceedings between oral and poster presentations.
Each presentation will take 20 minutes: 17 minutes for the oral presentation and 3 minutes for questions. Please strictly observe this time limit to facilitate people moving between sessions. Each room is equipped with a projector, a PC laptop and a screen; authors may also bring their own laptop or borrow one from another author presenting in the same session. Presenters should arrive before their session starts and check that their slides work satisfactorily with the audio-visual system in the room. If not, they should report the problem to the session chair or the staff.
Posters will be presented in a poster session held separately from the oral sessions (the time and location of this session will be announced in the conference program). A poster-board will be available for each poster presenter. Each poster-board measures 1.0 meters wide x 2.35 meters high, so please print your poster at dimensions that fit these boards. Each poster-board will display a paper number indicating the assigned poster paper. Please check this information before placing your poster to avoid overlapping with another poster presenter. The organisation will provide adhesive tape for hanging the posters. Poster presenters are kindly requested to remove their posters once the session is over.
Donostia/San Sebastian is a medium-sized coastal city (183,000 inhabitants) located in northern Spain, 20 km from the border with France. It stands as an amphitheatre overlooking the sea. It is known the world over for its spectacular bay and is referred to as the Pearl of the Cantabrian Sea.
The unparalleled beauty, cosmopolitan atmosphere, and high standards offered to everyone who visits Donostia/San Sebastian have turned the city into a world-class conference and meeting destination. The city is small enough to be pleasantly explored on foot or by bike, so business travellers will enjoy walking or cycling around to make the most of their trip. The city boasts a wide range of pedestrian areas, parks, landscaped gardens, and wonderful walks along the sea and the river, which makes it easy to reach the city's sights on foot or by bike.
Donostia/San Sebastián can be proud of the high quality of all of its tourist resources and services. In its relentless pursuit of excellence in tourism, the city has over one hundred service providers, facilities, and resources that have been awarded quality seals: one of the 25 best beaches in the world 2012 (Travellers' Choice), best destination in Spain - Golden List 2013 (Condé Nast Traveller), best quality destination in Spain 2012 (SICTED), and one of the 15 best travel destinations 2012 (TripAdvisor). All this, together with a wide variety of cultural activities, led to the city being named European Capital of Culture 2016.
Last but not least, Donostia/San Sebastian is internationally renowned as a culinary haven. Here visitors will find more Michelin stars per square metre than in any other city in the world. Furthermore, the city witnessed the renaissance of Basque cuisine, courtesy of the "new Basque cuisine" movement. Donostia/San Sebastian's top-quality foods and mouth-watering "pintxos", or morsels, are available to visitors all year round.
The conference will take place at the Kursaal Convention Center and Auditorium. The Kursaal is located in the city centre, overlooking the seafront. This avant-garde architectural showpiece was designed by Rafael Moneo and won the Mies van der Rohe prize for the best building in Europe in 2001. Fully equipped with cutting-edge technology, its two auditoriums and multi-purpose modular meeting rooms offer the best location for all kinds of meetings, from small business meetings to big congresses, as well as product launches, conventions and exhibitions. The main hotels, restaurants and shopping areas are within walking distance. A team led by two-Michelin-star chef Andoni Luis Adúriz is in charge of the on-site catering service.
Donostia is a small city, and there are three ways of getting around: on foot, by bus, or by bicycle. The city bus service is DBUS, which can be used to travel from the various hotels to the congress venue. You can find more information (timetables, stops, etc.) on their website.
The reception will take place in the Miramar Palace, which is about 30 minutes from the Kursaal (see the map below).
Donostia is a very popular city and booking a room in summer can be difficult, so we advise anyone planning to attend the conference to book well in advance.
The conference organization has pre-reserved 100 rooms in four hotels. 50 rooms have been pre-reserved in the Amara Plaza Hotel. All of these rooms are double; the price for single use is 140€ and for double use 150€, with breakfast and VAT included in both cases. If you are interested in booking one of these rooms, please contact the hotel and use the promotional code IEEE-CEC.
A further 30 rooms have been pre-reserved in the Codina Hotel (already fully booked!). The prices are 75.5€/night for single rooms (86.85€/night with breakfast) and 87.3€/night for double rooms (108€/night with breakfast), VAT included. If you are interested in booking one of these rooms, please contact the hotel and use the promotional code CEC2017. The last 20 rooms are pre-reserved in two more hotels, 10 in each. The first is the Hotel Arrizul Congress, located 150 m from the venue; the price for a single room (5 available) is 152€ and for a double room (single or double use) 190€. The second is the Hotel Arrizul Urumea, where the price for a double room is 110€ for single use and 120€ for double use. In all cases breakfast and VAT are included in the price. For these two hotels the contact is Danel Zulaika and the discount code is CEC 2017 Kursaal.
For a complete list of accommodation options in Donostia (as well as much other useful information), please visit the tourism website of the city council.
General Chair
General Chair
Program Chair
Program Chair
Finance & Local Chair
Finance & Local Chair
Special Sessions Chair
Special Sessions Chair
Tutorials Chair
Tutorials Chair
Competitions Chair
Competitions Chair
Proceedings Chair
Proceedings Chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Technical Co-chair
Conflict-of-interest Chair
Conflict-of-interest Chair
Conflict-of-interest Chair
Conflict-of-interest Chair
Conflict-of-interest Chair
Conflict-of-interest Chair
Local Co-chair & Webmaster
Local Co-chair & Webmaster
Local Co-chair
Local Co-chair
Papers can be submitted through the following link (please note that there is no need to sign up in the system, so partial submissions cannot be saved). The extended deadline for submissions is January 30, 2017 (24:00 EST). Please note that no further extensions will be granted.
Please read the following paper submission guidelines before submitting your papers.
All papers must be submitted through the IEEE CEC 2017 online submission system. For special session papers, please select the respective special session title under the list of research topics in the submission system.
In order for your papers to be included in the congress program and the proceedings, final accepted papers must be submitted and the corresponding registration fees must be paid by March 31, 2017 (24:00 EST).
All accepted papers require a full (non-student) registration. If you have any problems or questions regarding the submission of your papers, please contact us at submission@cec2017.org.
To help ensure correct formatting, please use the IEEE conference paper templates (US Letter). Please prepare the manuscripts following the template instructions. Any deviation from them may result in the rejection of your paper. Only papers prepared in PDF format will be accepted.
The default payment method is Mastercard/VISA credit card. For anyone experiencing problems, payment by bank transfer can be arranged. For more information, please contact the conference organization at info@cec2017.org.
Attendees who cancel their registration by May 1, 2017 will receive a refund of the fee paid, less an administrative charge of 100€. After that date no cancellations are possible. Registration fees covering the publication of one or more papers cannot be cancelled.
| Category | Early registration | Regular registration |
|---|---|---|
| IEEE Member | 675€ | 800€ |
| Student, IEEE Member | 350€ | 400€ |
| Non IEEE Member | 775€ | 900€ |
| Student, non IEEE Member | 450€ | 500€ |
| Life Member | 350€ | 350€ |
Multiobjective evolutionary computation has been a major research topic in the field of evolutionary computation for many years. It is generally accepted that the combination of evolutionary algorithms and traditional optimization methods should form the next generation of multiobjective optimization solvers. Decomposition methods have been widely used and studied in traditional multiobjective optimization, and it is well known that the Pareto optimal solution set of a continuous multiobjective problem often exhibits some regularity. In this talk, I will describe two multiobjective evolutionary algorithms, MOEA/D and RM-MEDA, both of which borrow ideas from traditional optimization. MOEA/D decomposes a multiobjective problem into a number of subtasks and then solves them in a collaborative manner; it provides a very natural bridge between multiobjective evolutionary algorithms and traditional decomposition methods, and it has become a commonly used evolutionary algorithmic framework in recent years. RM-MEDA makes use of the regularity property to model the distribution of Pareto optimal solutions in the search space and then generates new solutions from the model thus built; machine learning techniques can readily be used in RM-MEDA. I will explain the basic ideas behind these two algorithms and some recent developments, and I will also outline some possible research issues in multiobjective evolutionary computation.
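The decomposition idea behind MOEA/D described above can be illustrated with a minimal sketch: a bi-objective problem is split into scalar subproblems via Tchebycheff scalarization, and each subproblem cooperates with its weight-vector neighbours. This is an illustrative toy (single real decision variable, simple Gaussian variation); the function names and parameter values are assumptions for the example, not the talk's reference implementation.

```python
import random

def tchebycheff(f, w, z):
    """Tchebycheff scalarization of objective vector f w.r.t. weight w and reference z."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))

def moead(objectives, n_sub=20, n_gen=100, T=3, bounds=(0.0, 1.0)):
    """Decompose a bi-objective problem over one real variable into n_sub subproblems."""
    W = [(i / (n_sub - 1), 1.0 - i / (n_sub - 1)) for i in range(n_sub)]
    # neighbourhood of each subproblem: the T subproblems with the closest weights
    B = []
    for i in range(n_sub):
        start = max(0, min(n_sub - T, i - T // 2))
        B.append(list(range(start, start + T)))
    pop = [random.uniform(*bounds) for _ in range(n_sub)]
    F = [objectives(x) for x in pop]
    z = [min(f[j] for f in F) for j in range(2)]  # reference (ideal) point
    for _ in range(n_gen):
        for i in range(n_sub):
            a, b = random.sample(B[i], 2)
            # simple variation using two neighbours plus Gaussian noise
            y = pop[a] + random.gauss(0.0, 0.3) * (pop[a] - pop[b]) + random.gauss(0.0, 0.05)
            y = min(max(y, bounds[0]), bounds[1])
            fy = objectives(y)
            z = [min(z[j], fy[j]) for j in range(2)]
            for k in B[i]:  # a better solution replaces neighbouring incumbents
                if tchebycheff(fy, W[k], z) < tchebycheff(F[k], W[k], z):
                    pop[k], F[k] = y, fy
    return pop, F
```

On Schaffer's classic bi-objective problem (f1 = x^2, f2 = (x-2)^2), the final population approximates the Pareto front, with the extreme weight vectors driving each objective toward its individual minimum.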
Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. MOEA/D, a multiobjective optimization algorithm developed in his group, won the Unconstrained Multiobjective Optimization Algorithm Competition at the Congress on Evolutionary Computation 2009 and was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the list of the Thomson Reuters 2016 highly cited researchers in computer science.
Currently, Evolutionary Computation (EC) is used only occasionally for the design of industrial products in the communications industry. The reasons for this are manifold. Firstly, in most cases domain-specific design techniques are available and much better known to engineers, because these traditional techniques are taught throughout the universities while EC curricula are rare. Secondly, many EC methods are not simple enough to understand, which makes it hard to justify applying them. Given that convergence of EC is generally not guaranteed, the risk of spending time on an unfamiliar method is not taken, so today EC does not yet belong to the standard "bag of tricks" an engineer is equipped with. Thirdly, undertaking a design with EC comes with fairly long convergence times compared to traditional techniques, mostly because the latter are tuned to exploit domain properties and hence are specialized for the problem at hand. In some instances, however, traditional approaches do not yield the desired result or cannot solve the problem at all, so that EC techniques have to come to the rescue. I will provide some real-world examples that have been designed with Differential Evolution (DE) and that have entered real products. I will explain why DE has been used in these cases, how the problems were solved, and what is required for EC methods to gain a stronger foothold in industry, hopefully spawning a new research direction to benefit both EC in general and industrial applications.
Dr. Rainer Storn is Director of R&D Platform Software at Rohde & Schwarz GmbH & Co. KG, Germany. In 1994 he was granted a Post-Doc Fellowship, followed by a Senior Visitor Leave grant, at the International Computer Science Institute (ICSI) in Berkeley, USA, where he and Ken Price invented Differential Evolution (DE). Rainer Storn is the author of 13 patents and more than 44 papers and articles, and co-author of or contributor to five books, among them Differential Evolution - A Practical Approach to Global Optimization. He is a reviewer for many scientific journals, among them the IEEE Transactions on Signal Processing, IEEE Transactions on Evolutionary Computation, and IEEE Transactions on Systems, Man and Cybernetics. Since 2005 he has specialized in software-defined radios (SDR) at Rohde & Schwarz and has helped build numerous professional radios, especially for the airborne market. Rainer Storn has applied DE numerous times in applications for the communications and measurement industry and has helped to include DE in several commercial software packages.
Black box optimization begins with the assumption that if nothing is known about the objective function, then there is no justifiable reason for an optimization algorithm to, e.g., preferentially search in one direction, or to favor one coordinate system orientation over another. This paper investigates whether or not classic differential evolution (DE/rand/1/bin) satisfies these black-box constraints and, if not, what it takes to bring the algorithm into conformity with them. The result is an exceptionally simple algorithm, black box differential evolution (BBDE), whose performance is invariant under a coordinate system translation, an orthogonal rotation, a reflection and a permutation of parameters. Performance is also invariant under both the addition of a function bias and an order-preserving transform of the objective function. On the family of ellipsoids, its performance is invariant to both scaling and high conditioning (eccentricity). Additionally, BBDE is free of both selection and generating drift biases. Furthermore, selection, mutation and recombination are decoupled to become independent operations, as they should be, since each performs a distinctly different function that ought not to be duplicated by another. BBDE also satisfies a few algorithm-specific, symmetry-based constraints. Like the CMA-ES, BBDE's only control parameter is the population size. In short, BBDE appears to be the simplest DE strategy to conform to a set of symmetry-based constraints that are necessary for unbiased, i.e. black box, optimization.
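For readers unfamiliar with the baseline the abstract starts from, the classic DE/rand/1/bin strategy (differential mutation of three distinct random individuals, binomial crossover, greedy one-to-one selection) can be sketched as below. This is a minimal illustrative implementation under common default parameters, not BBDE and not code from the talk.

```python
import random

def de_rand_1_bin(f, bounds, np_=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Classic DE/rand/1/bin for continuous minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # three distinct individuals, all different from the target i
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])  # differential mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal function such as the 3-D sphere, this baseline converges rapidly to the optimum.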
Kenneth V. Price earned his B.Sc. in physics from Rensselaer Polytechnic Institute in 1974. He briefly worked as a supervisor at the Teledyne-Gurley Scientific Instrument Company in Troy, New York before moving to San Francisco. He currently resides in Vacaville, California. An avid hobbyist, he is self-taught in the field of evolutionary computation. In 1994, he published an early ensemble annealing, threshold accepting algorithm ("genetic annealing"), which led Dr. R. Storn to challenge him to solve the Tchebyshev polynomial fitting problem. Ken’s discovery of differential mutation proved to be the key to solving not only the Tchebyshev polynomial fitting problem, but also many other difficult numerical global optimization problems. He is co-author of both the seminal paper on the differential evolution algorithm and the book “Differential Evolution: A practical approach to global optimization”. Ken has also authored or coauthored seven additional peer-reviewed papers, contributed chapters to three books on optimization and has served as a reviewer for twelve journals.
Clustering is an unsupervised exploratory data analysis tool which groups data on the basis of some similarity/dissimilarity metric such that a pre-defined criterion is optimized. The problem of clustering is therefore essentially one of optimization. Metaheuristic methods like genetic algorithms have been used successfully in the past for clustering a data set. It should be noted that the clustering problem admits a number of criteria, or cluster validity indices, that have to be simultaneously optimized to obtain improved results. Hence, in recent times the problem has been posed in a multiobjective optimization (MOO) framework, and popular metaheuristics for multiobjective optimization have been applied. In this talk, we will first briefly discuss the fuzzy c-means algorithm and the basic principles of MOO. Subsequently, it will be shown how a popular multiobjective optimization algorithm may be used for solving the clustering problem. Since such algorithms provide a number of solutions, a way of combining the multiple clustering solutions so obtained into a single one using supervised learning will be explained. The talk will conclude by demonstrating an application of the multiobjective clustering technique on several gene expression data sets. Before explaining the application, a brief introduction to gene expression data will be provided and the importance of clustering gene expression data will be discussed.
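As background for the fuzzy c-means algorithm the talk opens with, its core alternation (update soft memberships from current centres, then recompute centres as membership-weighted means) can be sketched in one dimension. The initialization and distance metric below are simplifying assumptions made for the example.

```python
def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: alternate membership and centroid updates."""
    xs = sorted(X)
    # spread the c initial centres across the data range (illustrative choice)
    centers = [xs[round(i * (len(xs) - 1) / (c - 1))] for i in range(c)]
    for _ in range(iters):
        U = []  # U[i][k]: membership of point i in cluster k (rows sum to 1)
        for x in X:
            d = [abs(x - v) + 1e-12 for v in centers]
            U.append([1.0 / sum((d[k] / d[l]) ** (2.0 / (m - 1.0)) for l in range(c))
                      for k in range(c)])
        # centroids are membership-weighted means with fuzzifier m
        centers = [sum(U[i][k] ** m * X[i] for i in range(len(X))) /
                   sum(U[i][k] ** m for i in range(len(X)))
                   for k in range(c)]
    return centers, U
```

On two well-separated 1-D groups, the centres converge to the group means while each membership row remains a probability-like vector summing to one.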
Dr. Sanghamitra Bandyopadhyay did her B.Tech., M.Tech. and Ph.D. in Computer Science at Calcutta University, IIT Kharagpur and ISI, respectively. She is currently a Professor (Higher Administrative Grade) at the Indian Statistical Institute, Kolkata, India. She has worked in various universities and institutes worldwide, including in the USA, Australia, Germany, China, Italy and Mexico, and has delivered invited lectures in many more countries. She has authored or co-authored more than 145 journal papers and 140 articles in international conferences and book chapters, and published six authored and edited books with publishers such as Springer, World Scientific and Wiley. She has also edited journal special issues in the areas of soft computing, data mining, and bioinformatics. Her research interests include computational biology and bioinformatics, soft and evolutionary computation, pattern recognition and data mining. She is a Fellow of the National Academy of Sciences, Allahabad, India (NASI), the Indian National Academy of Engineering (INAE) and the West Bengal Academy of Science and Technology, and a senior member of the IEEE. Sanghamitra is the recipient of several prestigious awards, including the Dr. Shanker Dayal Sharma Gold Medal and the Institute Silver Medal from IIT Kharagpur, India, the Young Scientist Awards of the Indian National Science Academy (INSA) and the Indian Science Congress Association (ISCA), the Young Engineer Award of the Indian National Academy of Engineering (INAE), the Swarnajayanti fellowship from the Department of Science and Technology (DST), and the Humboldt Fellowship from Germany. She has been selected as a Senior Associate of ICTP, Italy, and was awarded the prestigious Shanti Swarup Bhatnagar Prize in Engineering Science.
Sorry, the information you have asked for is not available yet. We will add it as soon as possible.
On the one hand, computationally hard combinatorial problems are common in practice, and many evolutionary and general-purpose heuristic algorithm classes have been proposed for both single- and multi-objective optimization. On the other hand, fitness landscape analysis is a well-established field for understanding the relation between the problem search space structure and the algorithm paradigm and components in evolutionary computation. Starting by presenting state-of-the-art tools from single-objective fitness landscapes, we identify the main differences and additional issues to be addressed for a deep understanding of multi-objective fitness landscapes. We expose and contrast the impact of fitness landscape characteristics on the performance of optimization algorithms for single- and multi-objective black-box combinatorial optimization problems. A sound and concise summary of features characterizing the structure of an arbitrary problem instance is identified and related to the expected dynamics of a number of algorithm classes to be introduced during the tutorial. Their intercorrelation and their association with algorithm performance are also analyzed. This allows us to assess the individual and the joint effect of problem features on algorithm performance, and to highlight the main difficulties encountered by optimization algorithms. By providing effective tools and practical examples for both single- and multi-objective fitness landscape analysis, further insights are given on the importance of ruggedness and multimodality in characterizing the difficulty of an instance for combinatorial optimization problems and algorithms. Finally, we conclude with guidelines for the design of randomized search heuristics based on the main fitness landscape features, and we identify a number of open challenges for the future of fitness landscapes and evolutionary algorithms.
Theoretical research has always accompanied the development and analysis of evolutionary algorithms, both by explaining observed phenomena in a very rigorous manner and by creating new ideas. Since the methodology of theory research is very different from experimental or applied research, non-theory researchers occasionally find it hard to understand and profit from theoretical research. Overcoming this gap in our research field is the target of this tutorial. To this aim
Many practical optimization problems should better be posed as bilevel optimization problems in which there are two levels of optimization tasks. A solution at the upper level is feasible if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper level optimization problem of performing the balancing task in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the "nestedness" of one optimization task into another. Evolutionary Algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activities towards solving bilevel optimization problems. In this tutorial, we will introduce principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single and multi-objective EAs for bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
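The "nestedness" described above can be made concrete with a toy sketch: for every candidate upper-level decision, the lower-level problem is solved first, and only the resulting lower-level optimum is used to evaluate the upper level. Plain random search stands in for the evolutionary operators here; all names and budgets are illustrative assumptions.

```python
import random

def lower_opt(f_low, x, bounds, rng, n=200):
    """Solve the lower-level problem for a fixed upper-level decision x (random search)."""
    best_y, best_f = None, float("inf")
    for _ in range(n):
        y = rng.uniform(*bounds)
        v = f_low(x, y)
        if v < best_f:
            best_y, best_f = y, v
    return best_y

def bilevel_random_search(f_up, f_low, xb, yb, n=300, seed=3):
    """Nested bilevel search: a pair (x, y) is feasible only if y is lower-level optimal."""
    rng = random.Random(seed)
    best_x = best_y = None
    best = float("inf")
    for _ in range(n):
        x = rng.uniform(*xb)
        y = lower_opt(f_low, x, yb, rng)  # nested inner optimization
        v = f_up(x, y)
        if v < best:
            best_x, best_y, best = x, y, v
    return best_x, best_y, best
```

For example, with upper objective (x-1)^2 + y^2 and lower objective (y-x)^2, the lower level forces y ≈ x, so the bilevel optimum is near x = y = 0.5 with value 0.5.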
Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. Continuing from the first tutorial, this second tutorial focuses on techniques that facilitate runtime analysis of complex, population-based EAs.
We first give a brief overview of the population-based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations such as advanced drift theorems and level-based analysis.
To illustrate the application of these techniques, we consider several fundamental questions: What are some conditions that prevent EAs from optimising a function efficiently? What is the appropriate balance between exploration and exploitation and how does this depend on relationships between mutation and selection rates? When are populations necessary for efficient optimisation with EAs? What determines an EA's tolerance for uncertainty, e.g. in the form of noisy or partially available fitness?
Sophisticated optimization problems (non-convex, non-differentiable, with many local optima, etc.) arise in many machine learning tasks. While mathematical programming methods are the mainstream tools in the machine learning community, they struggle with non-convex problems. Evolutionary computation provides a set of direct search tools that make it possible to tackle non-convex optimization problems for machine learning. This tutorial introduces the latest advances on how emerging machine learning problems are successfully addressed by EC techniques. On this basis, key issues that may further boost the application of EC to non-convex machine learning will also be discussed, from the perspective of both theory and industrial-strength applications, with the aim of shedding light on directions for future research.
For all supervised learning problems, where the quality of solutions is measured by a distance between target and output values (error), geometric semantic operators (GSOs) of genetic programming (GP) induce an error surface characterized by the absence of locally suboptimal solutions (unimodal error surface). So, GP that uses GSOs, called geometric semantic GP (GSGP), has a potential advantage in terms of evolvability compared to many other existing computational methods. This fosters GSGP as a possible new state-of-the-art machine learning method. Nevertheless, GSGP is still very young (it was first introduced in 2013) and research is still much in demand. This tutorial is oriented to researchers and students that are not familiar with GSGP, and are willing to deepen and contribute to this exciting and promising field. The first objective of this tutorial is to explain why the error surface induced by GSOs is unimodal, and why this fact is important. This will be done after a soft introduction to the important concept of fitness landscape, and its applications to GP. Then, the tutorial presents some known drawbacks of GSOs and discusses workarounds to limit those drawbacks. In particular, a very efficient implementation of these operators, that makes them usable in practice, is presented. After that, the tutorial focuses on the numerous interesting applicative results that have been obtained so far by GSGP. Among the different real-life problems that have been tackled with success using GSGP, covering several different applicative domains, the tutorial will discuss the prediction of the relative positions of computer tomography slices, the forecasting of energy consumption, the prediction of pharmacokinetic parameters in drug discovery and development, the prediction of the unified Parkinson's disease rating scale assessment and the prediction of high performance concrete strength. 
While discussing the results obtained on these applications, the audience will also discover that GSOs have properties that may help limit overfitting, bestowing on GSGP a very interesting generalization ability. Finally, the tutorial suggests further readings and discusses open issues of GSGP.
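The key geometric property behind the unimodal error surface discussed above can be illustrated directly: a geometric semantic crossover produces an offspring whose outputs are a convex combination of its parents' outputs, so by convexity of the Euclidean norm its training error can never exceed the worse parent's. The sketch below represents programs as plain Python callables; all names are illustrative, not the tutorial's implementation.

```python
import random

def semantics(program, inputs):
    """The semantics of a program is its vector of outputs on the training inputs."""
    return [program(x) for x in inputs]

def rmse(sem, target):
    """Root-mean-square error between a semantics vector and the target vector."""
    return (sum((s - t) ** 2 for s, t in zip(sem, target)) / len(target)) ** 0.5

def gs_crossover(p1, p2, rng):
    """Geometric semantic crossover: the offspring computes a convex combination of
    its parents' outputs, so its semantics lies on the segment between theirs."""
    r = rng.random()
    return lambda x, p1=p1, p2=p2, r=r: r * p1(x) + (1.0 - r) * p2(x)
```

With any two parent programs and any target, the offspring's error is bounded by the larger parent error, which is exactly why the induced error surface has no locally suboptimal basins for this operator.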
New developments in Gray Box Optimization make it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP-hard problems (ranging from MAXSAT to the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves (often in constant time) and to use highly efficient forms of greedy deterministic recombination. In some cases, this makes random mutation and random recombination unnecessary and obsolete.
Deterministic "Partition Crossover" can be applied to k-bounded pseudo-Boolean optimization problems (such as MAXSAT and NK landscapes) as well as to problems such as the Traveling Salesman Problem (TSP). Partition Crossover locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of 2^q possible offspring. For example, for q=40, partition crossover returns the best of over one trillion possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents, and offspring are typically also local optima in the full space. This allows partition crossover to directly "tunnel" between local optima, moving directly from local optimum to local optimum.
For the TSP, these results can be used to improve the best existing TSP solvers (e.g., LKH and EAX). They can also be used to improve exact branch-and-bound algorithms such as Concorde.
For k-bounded pseudo-Boolean optimization problems these new algorithms are able to solve problems with 1 million variables.
Other recent results also use a similar form of local decomposition coupled with greedy and deterministic optimization. When applied to multiply constrained scheduling problems, the genetic algorithm is able to solve industrial problems with up to 1 billion variables.
These results have the potential to revolutionize evolutionary computation. The methods are decidedly not black box optimizers, and the new algorithms trivially solve many black box benchmarks: ONEMAX, Leading Ones and Trap functions are solved in O(n) time using only 1 evaluation.
Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions to problems and across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to solving problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flowshop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The tutorial will also demonstrate the use of the EvoHyp toolkit for the implementation of evolutionary algorithm selection and generation hyper-heuristics. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is using hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
Parallel and distributed computing can be used in the design and implementation of evolutionary algorithms to speed up the search, improve the quality of the obtained solutions, improve their robustness, and solve large-scale problems.
From the algorithmic design point of view, we will present the main parallel models for evolutionary algorithms (algorithmic level, iteration level, solution level). We will also address:
From the implementation point of view, we concentrate here on the parallelization of evolutionary algorithms on general-purpose parallel and distributed architectures, since this is the most widespread computational platform. The rapid evolution of technology in terms of processors (GPUs, multi-core), networks (Infiniband), and architectures (GRIDs, clusters, Clouds) makes those architectures very popular nowadays.
Different architectural criteria which affect the efficiency of the implementation will be considered: shared memory / distributed memory, homogeneous / heterogeneous, dedicated / non-dedicated, local network / large network. Indeed, those criteria have a strong impact on the deployment techniques employed, such as load balancing and fault tolerance.
Finally, some software frameworks for parallel evolutionary algorithms such as PARADISEO are presented. Those frameworks allow the design of parallel and hybrid metaheuristics for mono-objective and multi-objective optimization, and the transparent implementation on different parallel and distributed architectures using adapted middleware.
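To give a flavour of the algorithmic-level (island) model mentioned above, here is a minimal Python sketch of ring-topology migration between sub-populations. The (genome, fitness) pair encoding, the function name and the lower-is-better convention are illustrative assumptions, not part of PARADISEO or any other framework:

```python
def migrate(islands, k=1):
    """Ring-topology migration for the island (algorithmic-level) parallel
    model: each island sends a copy of its k best individuals to the next
    island, replacing that island's k worst (lower fitness = better here)."""
    fitness = lambda ind: ind[1]          # individuals are (genome, fitness)
    best = [sorted(isl, key=fitness)[:k] for isl in islands]
    for i in range(len(islands)):
        dest = islands[(i + 1) % len(islands)]
        dest.sort(key=fitness)
        dest[-k:] = list(best[i])         # copy in the migrants
    return islands
```

In a real parallel implementation each island would run its own EA between migration events, possibly on a separate node; here the islands are plain lists purely for illustration.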
The main objective of this tutorial is to answer the question of whether particle swarm optimization (PSO) can be considered a universal optimizer. In the context of this tutorial, this means that PSO can be applied to a wide range of optimization problem types as well as search domain types. The tutorial will start with a very compact overview of the original, basic PSO; some experience and background on PSO will be assumed. The tutorial will cover a classification of different problem types, and will show how PSO can be applied to solve problems of these types. This part of the tutorial will be organized in the following sections, one for each problem type:
For each problem type, it will be shown why the standard PSO cannot solve these types of problems efficiently. Simple adaptations to the PSO that allow it to solve each problem type will then be discussed. The focus will be on PSO adaptations that do not violate the foundational principles of PSO. For each of these problem types, a small subset of the most successful algorithms will be discussed.
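For attendees who want a reminder of the basic PSO that the tutorial assumes as background, a minimal sketch of the canonical inertia-weight update is given below. The parameter defaults and names are common illustrative choices, not a definitive implementation:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.72, c1=1.49, c2=1.49, rng=None):
    """One synchronous step of the canonical (inertia-weight) PSO:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    rng = rng or random.Random()
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = rng.random(), rng.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

After each step, `pbest` and `gbest` would be refreshed from the new positions; many of the adaptations covered in the tutorial modify exactly this update rule.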
In the era of big data, leveraging recent advances in distributed technologies enables data mining techniques to discover unknown patterns or hidden relations in voluminous data in a faster way. Extracting knowledge from big data becomes a very interesting and challenging task in which we must consider new paradigms to develop scalable algorithms. However, evolutionary models for machine learning and data mining cannot be straightforwardly adapted to the new space and time requirements. Hence, existing algorithms should be redesigned, or new ones developed, in order to take advantage of their capabilities in the big data context. Moreover, real-world complex big data problems pose several issues beyond computational complexity, and big data mining techniques should be able to deal with challenges such as dimensionality, class imbalance, and lack of annotated samples, among others.
In the first part of this tutorial, we will provide a brief introduction to the big data problem, including MapReduce, the most representative programming paradigm, as well as an overview of recent technologies (the Hadoop ecosystem, Spark). Then, we will dive into the field of big data analytics, explaining the challenges that arise for evolutionary models and introducing machine learning libraries such as Mahout, MLlib and FlinkML.
Afterwards, we will address the main topic of this CEC 2017 tutorial: evolutionary models in the big data context. Some case studies will be presented for evolutionary instance selection/generation, feature selection/weighting and imbalanced data classification. Finally, we will carry out a live demonstration with MLlib and the evolutionary models we have developed for imbalanced big data classification.
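For readers unfamiliar with MapReduce, its map-shuffle-reduce flow can be illustrated in miniature with plain Python. This single-machine sketch only mimics the paradigm; Hadoop and Spark distribute the same three phases across a cluster, and all names here are illustrative:

```python
from itertools import groupby

def map_phase(record):
    """map: emit a (key, 1) pair for each word in a line."""
    return [(word, 1) for word in record.split()]

def word_count(lines):
    """Minimal single-machine illustration of the MapReduce flow:
    map -> shuffle (sort/group by key) -> reduce (sum the values)."""
    mapped = [kv for line in lines for kv in map_phase(line)]
    mapped.sort(key=lambda kv: kv[0])               # shuffle/sort
    return {key: sum(v for _, v in group)           # reduce
            for key, group in groupby(mapped, key=lambda kv: kv[0])}
```

The same three-phase structure is what scalable evolutionary models must be reformulated into in order to run on these platforms.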
Most soft-computing, heuristic, non-deterministic or numerical methods have in common that they need a stopping criterion. This criterion, which is usually a heuristic itself, is responsible for minimizing the waste of computational resources by detecting scenarios where it makes no sense to continue executing the method. Hence, the success or failure of any practical application relies heavily on not only the techniques applied but also the support methodologies, including the stopping criterion.
Paradoxically, the matter of stopping criteria and convergence detection has often been overlooked by most of the evolutionary multi-objective optimization (EMO) community. This is probably because it plays a supporting role and, consequently, the theoretical and practical implications of this topic have not yet been properly studied and disseminated. However, it can be argued that many real-world applications of theoretically outstanding methods may have underperformed due to an incorrect algorithm termination scheme. In this tutorial, we propose an updated summary of the results obtained so far in this area and provide re-usable examples of how these methods should be applied in real-life practice.
Typically, a stopping criterion is invoked at the end of an iteration of the algorithm. At that point, it is decided whether algorithm execution should continue or can be aborted. We have identified four scenarios when the execution of an algorithm should be terminated:
This tutorial aims to provide the evolutionary computation community with a comprehensive reference that encompasses:
By debating and presenting these topics, we intend to draw the attention of the EMO community to this issue, as well as to provide usable tools and methodologies to extend the proper use of these methods.
Demonstration and familiarization activities are a fundamental part of the tutorial. In this regard, we will perform a series of “live” comparative experiments relying on the EMO stopping criteria implemented by the authors.
For this purpose, various EMO algorithms will be applied to well-known benchmark problems. The populations of each run of the algorithms will be stored, and each of the criteria will be presented with them. This will allow us to point out their features and drawbacks. The exercises, which will be carried out interactively by the instructors and the interested public, will be available on the web during the tutorial and will remain online for later use, reference and study.
The necessary software for the tutorial is already available as a MATLAB/Octave EMO stopping criteria taxonomy that contains the “classical” as well as current state-of-the-art methods. This taxonomy is available online as open-source software at https://github.com/lmarti/emo-stopping-criteria-taxonomy.
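Independently of the MATLAB/Octave taxonomy, the flavour of a progress-based stopping criterion can be sketched in a few lines of Python. The choice of indicator, window length and threshold below are illustrative assumptions, not one of the taxonomy's methods:

```python
def should_stop(indicator_history, window=20, epsilon=1e-4):
    """A minimal progress-based stopping criterion: stop when a quality
    indicator of the population (e.g. hypervolume), recorded once per
    iteration, has improved by less than `epsilon` over the last
    `window` iterations."""
    if len(indicator_history) <= window:
        return False                       # not enough history yet
    return indicator_history[-1] - indicator_history[-1 - window] < epsilon
```

A criterion like this would be invoked at the end of every iteration of the EMO algorithm; the methods covered in the tutorial replace this naive difference test with statistically sounder convergence detectors.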
Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years [7], and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems [1, 8]. Recent advances in machine learning have likewise produced very large-scale optimization problems in the training of deep neural network architectures (so-called deep learning), some of which have over a billion decision variables [2, 4]. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems.
In this tutorial, we provide an overview of recent advances in the field of evolutionary large-scale global optimization with an emphasis on the divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches including the non-decomposition based approaches such as memetic algorithms and sampling methods to deal with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms including the differential grouping algorithm [5] and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks [3]. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms [6]. Overall, this tutorial takes the form of a critical survey of the existing methods with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
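To give a flavour of explicit decomposition, the following Python sketch shows the pairwise interaction test that underlies differential grouping [5]: two variables are placed in the same subcomponent if perturbing one changes the effect of perturbing the other. This is a simplified illustration; the published algorithm is more careful about sampling points and thresholds, and all names here are our own:

```python
def interact(f, i, j, n, a=0.0, b=1.0, eps=1e-10):
    """Interaction check behind differential grouping: variables i and j
    interact if the forward difference of f w.r.t. x_i changes when x_j
    takes a different value."""
    base = [a] * n

    def delta(x):                 # effect of moving x_i from a to b at point x
        x_lo, x_hi = list(x), list(x)
        x_hi[i] = b
        return f(x_hi) - f(x_lo)

    x_mod = list(base)
    x_mod[j] = b                  # repeat the same test at a different x_j
    return abs(delta(base) - delta(x_mod)) > eps
```

On a fully separable function this test is negative for every pair, so each variable can be optimized in its own subpopulation by cooperative co-evolution; positive pairs are grouped into a common subcomponent.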
Population- or single-solution search-based optimization algorithms (i.e. {meta,hyper}heuristics) in their original forms are usually designed to locate a single global solution. Representative examples include, among others, evolutionary and swarm intelligence algorithms. These search algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are "multi-modal" by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions, or even all of them, so that a decision maker can choose the one that is most appropriate in his/her problem domain.
Numerous techniques have been developed in the past for locating multiple optima (global and/or local). These techniques are commonly referred to as "niching" methods. A niching method can be incorporated into a standard search-based optimization algorithm, in a sequential or parallel way, with the aim of locating multiple globally optimal or suboptimal solutions. Sequential approaches locate optimal solutions progressively over time, while parallel approaches promote and maintain the formation of multiple stable sub-populations within a single population. Many niching methods have been developed in the past, including crowding, fitness sharing, derating, restricted tournament selection, clearing, speciation, etc. More recently, niching methods have also been developed for meta-heuristic algorithms such as Particle Swarm Optimization, Differential Evolution and Evolution Strategies.
In this tutorial we aim to provide an introduction to niching methods, including their historical background and the motivation for employing niching in EAs. We will present in detail a few classic niching methods, such as fitness sharing and crowding. We will also review several new niching methods that have been developed for meta-heuristics such as Particle Swarm Optimization and Differential Evolution. Employing niching methods in real-world situations still faces significant challenges, and this tutorial will discuss several such difficulties. In particular, niching in static and dynamic environments will be specifically addressed. Following this, we will present a suite of new niching benchmark functions specifically designed to reflect the characteristics of these challenges. Performance metrics for comparing niching methods will also be presented, and their merits and shortcomings will be discussed. Experimental results across both classic and more recently developed niching methods will be analysed based on selected performance metrics. Apart from benchmark niching test functions, several examples of applying niching methods to real-world optimization problems will be provided. The tutorial will use several demos to show the workings of niching methods.
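As a taste of the classic methods covered, here is a minimal Python sketch of fitness sharing on a 1-D population: each individual's raw fitness is divided by a niche count, so crowded regions are penalized and stable sub-populations can form around several optima. The triangular sharing function, parameter defaults and names are illustrative choices:

```python
def shared_fitness(pop, raw_fitness, sigma_share=0.1, alpha=1.0):
    """Classic fitness sharing: divide each individual's raw fitness
    (higher = better) by its niche count, computed from all neighbours
    closer than sigma_share."""
    shared = []
    for x in pop:
        niche_count = sum(
            1.0 - (abs(x - y) / sigma_share) ** alpha
            for y in pop if abs(x - y) < sigma_share
        )
        shared.append(raw_fitness(x) / niche_count)
    return shared
```

Selection then operates on the shared values, so an isolated individual on a second peak keeps its full fitness while members of a crowded niche see theirs reduced.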
This tutorial is supported by the newly established IEEE CIS Task Force on Multi-modal Optimization (http://www.epitropakis.co.uk/ieee-mmo/). A recent overview of the area and its applications can be found in:
X. Li, M. Epitropakis, K. Deb and A. Engelbrecht, "Seeking Multiple Solutions: an Updated Survey on Niching Methods and Their Applications," IEEE Transactions on Evolutionary Computation, vol. PP, no. 99, pp. 1-1, doi: 10.1109/TEVC.2016.2638437.
Many real-world optimization problems are subject to dynamic environments, where changes may occur over time regarding optimization objectives, decision variables, and/or constraint conditions. Such dynamic optimization problems (DOPs) are inherently difficult and hence challenging. Yet they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DOPs has attracted a lot of research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well understood. This tutorial provides an introduction to the research area of EC for DOPs and carries out an in-depth description of the state of the art of research in the field. The purpose is to (i) provide a detailed description and classification of DOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations on how they work for DOPs; (iii) present current applications in the area of EC for DOPs; (iv) analyse current gaps and challenges in EC for DOPs; and (v) point out future research directions in EC for DOPs.
Differential Evolution (DE) is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps as employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members, also known as self-referential mutation. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in a lot of variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a novel concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE to improve the performance of DE on multi-modal landscapes. The talk will finally highlight a few problems that pose challenges to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
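As an illustration of the self-referential mutation described above, here is a minimal Python sketch of one generation of the classic DE/rand/1/bin scheme. The function name, parameter defaults and list-of-lists population encoding are our own illustrative choices, not any particular DE library:

```python
import random

def de_rand_1_bin(pop, f=0.5, cr=0.9, rng=None):
    """One generation of DE/rand/1/bin trial-vector construction.
    For each target x_i, a mutant v = a + f*(b - c) is built from three
    distinct other population members (the self-referential mutation: no
    separate probability distribution is needed), then binomially crossed
    with x_i to form the trial vector."""
    rng = rng or random.Random()
    n = len(pop[0])
    trials = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = rng.randrange(n)      # guarantees at least one mutant gene
        trial = [a[k] + f * (b[k] - c[k])
                 if (k == jrand or rng.random() < cr) else x[k]
                 for k in range(n)]
        trials.append(trial)
    return trials
```

Greedy selection (keep the trial if it is no worse than the target) then completes the generation; the variants covered in the tutorial mostly differ in how the mutant is built and how f and cr are adapted.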
This tutorial presents a general view of constraint-handling techniques in nature-inspired optimization. Such a need arises from the fact that nature-inspired algorithms, in their original versions, are designed to deal with unconstrained search spaces; therefore, how to handle feasibility information in those algorithms is an open problem. The tutorial starts with a set of important concepts to be used during the session. After that, the first constraint-handling techniques, mainly dominated by penalty functions, are presented. Then, the most recent efforts on constraint handlers are detailed. Finally, a summary of the material and the current trends in nature-inspired constrained optimization are shown.
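The penalty-function idea mentioned above can be sketched very compactly: a static exterior penalty turns a constrained problem into the unconstrained one that a nature-inspired algorithm expects. The names, the fixed penalty coefficient and the g(x) <= 0 feasibility convention are illustrative assumptions:

```python
def penalized(f, constraints, penalty=1e6):
    """Static exterior penalty: the unconstrained algorithm minimizes
    f(x) plus a penalty proportional to the total constraint violation,
    where each constraint g satisfies g(x) <= 0 when feasible."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return fitness
```

The weakness of this earliest handler, and the motivation for the more recent techniques the tutorial details, is that the penalty coefficient must be chosen per problem: too small and infeasible solutions win, too large and the search stalls at the feasibility boundary.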
The main objective of the tutorial is to familiarize the attendees with cloud computing concepts so that, by the end of the talk, they are able to use them in the design, development and eventual deployment of evolutionary algorithms or any other metaheuristic in the cloud.
Cloud computing comprises a series of technologies that imply changes in the way applications are designed and built. In the case of evolutionary computing or other metaheuristics, we are mainly talking about distributed implementations of evolutionary algorithms, but the changes go deeper and will probably require changes in the shape and design of the algorithm itself. At the end of the day, this tutorial addresses scientific programming in the cloud in general, using examples drawn from our own experience in evolutionary computation.
In this tutorial we will walk through the different technologies and methodologies involved in cloud computing, showing with working examples how they can be applied to evolutionary algorithms. We will also talk about different cloud service providers and how they can be used, for free or at a low price, to carry out experiments in a reproducible and thus more efficient way.
In recent years, complex networks have been receiving a lot of interest from researchers because of their capability of representing the relationships among the objects composing many real-world systems. One of the main problems in the study of complex networks is the detection of community structure, i.e. the division of a network into groups of nodes having dense intra-connections and sparse inter-connections. In this context, Genetic Algorithm-based approaches have played a central role because of their capability of exploring the search space and escaping from local minima during the optimization process. The objective of the tutorial is to introduce the evolutionary computation paradigm as a powerful technique for the community detection problem. The identification of a community structure in a network is formalized as an optimization problem that can be solved using a population-based model, by applying genetic operators that allow the exploration of the search space to find a solution. Several types of networks are considered, such as undirected, directed, weighted, signed, multi-dimensional and time-evolving networks, and the most recent proposals exploiting evolutionary techniques for finding communities in these types of networks, with both non-overlapping and overlapping community structure, are described. Moreover, the differences between the representations adopted by the methods, along with single-objective versus multi-objective approaches, are discussed.
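A common single-objective fitness in GA-based community detection is Newman's modularity, sketched below for an unweighted, undirected network. The adjacency-matrix and label-list encodings are illustrative assumptions; real implementations use sparse structures:

```python
def modularity(adj, communities):
    """Newman's modularity Q, a standard fitness for community detection:
    the fraction of edges falling inside communities minus the fraction
    expected in a random graph with the same node degrees.
    `adj` is a symmetric 0/1 adjacency matrix; `communities[i]` is the
    community label of node i."""
    m = sum(sum(row) for row in adj) / 2.0      # number of edges
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)
```

A GA then evolves the label assignment (or an equivalent locus-based encoding) to maximize Q, or trades Q off against other criteria in the multi-objective approaches discussed in the tutorial.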
The goal of this tutorial is to introduce the field of approximate computing to a wider evolutionary computing (EC) audience and present evolutionary computing as an ideal tool for solving many problems related to approximate computing. This tutorial is motivated by the fact that a new research field – approximate computing – was established in recent years to investigate how computer systems can be made better – more energy efficient, faster, and less complex – by relaxing the requirement that they are exactly correct. The current interest in approximations in computer science and engineering is primarily caused by the urgent need to reduce power consumption of computer systems ranging from low power IoT nodes, via mobile devices to supercomputers. Approximate computing addresses this challenge by exploiting the fact that some applications are inherently error resilient. Therefore, the error (accuracy of computations) can be used as a design metric and traded for performance or power consumption – creating thus a complex multi-objective optimization problem. The main advantage of the EC-based approach is that the approximation problem can be formulated as a multi-objective optimization/design problem and solved using well established multi-objective EAs, providing much better solutions than commonly used methods such as deterministic ad hoc approximations or simple greedy search algorithms. In particular, this tutorial deals with: (i) Introduction to approximate computing. (ii) Sensitivity analysis and error metrics. (iii) Approximation techniques for SW and HW components. (iv) Evolutionary circuit approximation. (v) Evolutionary software approximation. (vi) Case studies.
The concept of Smart Cities can be understood as a holistic approach to improve the level of development and management of the city in a broad range of services by using information and communication technologies.
It is common to recognize six axes of work within them: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this tutorial we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment and human health need to be reduced. We will show many new systems based on the use of bio-inspired techniques to ease road traffic flow in the city, as well as to allow a customized, smooth experience for travellers (private and public transport).
This talk will then discuss potential applications of intelligent systems for energy (such as adaptive street lighting), environmental applications (such as mobile sensors for air pollution), smart buildings (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.
Multiobjective optimization algorithms usually produce a set of trade-off solutions approximating the Pareto front where no solution from the set is better than any other in all objectives (this is called an approximation set). While there exist many measures to assess the quality of approximation sets, no measure is as effective as visualization, especially if the Pareto front is known and can be visualized as well. Visualization in evolutionary multiobjective optimization is relevant in many aspects, such as estimating the location, range, and shape of the Pareto front, assessing conflicts and trade-offs between objectives, selecting preferred solutions, monitoring the progress or convergence of an optimization run, and assessing the relative performance of different algorithms. This tutorial will provide a comprehensive overview of methods used in multiobjective optimization for visualizing either individual approximation sets or the probabilistic distribution of multiple approximation sets through the empirical attainment function (EAF).
The tutorial will build on the well-attended 2016 tutorial edition and extend it with a taxonomy of visualization methods and ten additional visualization methods not presented last year: aggregation trees, distance-based and dominance-based mappings, level diagrams with asymmetric norm, moGrams, polar plots, the tetrahedron coordinates model, trade-off region maps, treemaps, visualization following Shneiderman's mantra, and 3D-RadVis. We will demonstrate how each of the introduced methods visualizes the benchmark approximation sets. In addition, animations will be used where possible to show the additional capabilities of the visualization methods. Tutorial attendees will become aware of the many ways of visualizing approximation sets and the EAF, and learn about the advantages and limitations of each method.
In the last decade, the framework that has attracted the most attention from researchers in the evolutionary multi-objective optimization community is the decomposition-based framework. Decomposition is a well-known strategy in traditional multi-objective optimization. However, the decomposition strategy was not widely employed in evolutionary multi-objective optimization until Zhang and Li proposed the multi-objective evolutionary algorithm based on decomposition (MOEA/D) in 2007. MOEA/D decomposes a multi-objective optimization problem into a number of scalar optimization subproblems and optimizes them in a collaborative manner using an evolutionary algorithm. Each subproblem is optimized using information mainly from its several neighbouring subproblems. Since the proposal of MOEA/D in 2007, several studies have been conducted in the literature to: a) overcome the limitations of the design components of the original MOEA/D, b) improve the performance of MOEA/D, c) present novel decomposition-based MOEAs, and d) adapt decomposition-based MOEAs to different types of problems.
Investigations on the decomposition-based framework have been undertaken in several directions, including the use of new decomposition approaches, efficient allocation of computational resources, modifications of the reproduction operators, mating selection and replacement mechanisms, hybridization of decomposition- and dominance-based approaches, etc. Furthermore, several attempts have been made to extend the decomposition-based framework to many-objective optimization. This tutorial will present a comprehensive survey of the decomposition-based MOEAs proposed in the last decade for multi-objective and many-objective optimization. Apart from decomposition-based MOEAs, we will also discuss other recent significant works in the field of evolutionary multi-objective and many-objective optimization.
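To make the decomposition idea concrete, the following minimal Python sketch shows the weighted Tchebycheff scalarization commonly used by MOEA/D, together with the neighbourhood replacement step in which an offspring may update the solutions of neighbouring subproblems. The function names and the dictionary-based bookkeeping are illustrative assumptions, not the original implementation:

```python
def tchebycheff(fvals, weights, ideal):
    """Weighted Tchebycheff scalarization of one subproblem (minimization):
    g(x | w, z*) = max_j w_j * |f_j(x) - z*_j|."""
    return max(w * abs(f - z) for w, f, z in zip(weights, fvals, ideal))

def moead_update(neighbors, offspring_f, weights, ideal, archive_f):
    """Replace each neighbouring subproblem's solution if the offspring
    scores better on that subproblem's scalarized objective.
    `archive_f` maps subproblem index -> objective vector of its current best;
    `weights` maps subproblem index -> weight vector."""
    for j in neighbors:
        if (tchebycheff(offspring_f, weights[j], ideal)
                < tchebycheff(archive_f[j], weights[j], ideal)):
            archive_f[j] = offspring_f
    return archive_f
```

Each weight vector thus defines one scalar subproblem, and a single good offspring can improve several neighbouring subproblems at once, which is the collaborative mechanism at the heart of MOEA/D.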
Exploration and Exploitation are critical concepts that are poorly defined, and thus often misunderstood. We begin by defining an attraction basin as the set of all search points that lead to the same local optimum when greedy local search is applied. We then define exploration to be a search movement that involves multiple attraction basins and exploitation to be a search movement that involves (i.e. stays within) only one attraction basin. Assuming every point in the search space belongs to exactly one attraction basin, we can then accurately classify every search movement as either exploration or exploitation.
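The basin-based definitions above can be sketched in a few lines of Python for a 1-D continuous function; the fixed step size, the rounding tolerance and the function names are illustrative assumptions:

```python
def basin(x, f, step=0.1, tol=1e-9):
    """Greedy local search: from x, repeatedly move to the best immediate
    neighbour. The local optimum reached identifies x's attraction basin."""
    while True:
        best = min((x - step, x, x + step), key=f)
        if f(best) >= f(x) - tol:
            return round(x, 6)       # local optimum = basin identifier
        x = best

def classify_move(x_from, x_to, f):
    """A search movement is exploitation if both endpoints lie in the same
    attraction basin, exploration if it crosses into another basin."""
    return "exploitation" if basin(x_from, f) == basin(x_to, f) else "exploration"
```

With f(x) = (x^2 - 1)^2, which has minima at x = -1 and x = +1, a move from 0.5 to 0.9 stays in the basin of +1 (exploitation), while a move from 0.5 to -0.5 crosses into the basin of -1 (exploration).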
Based on the above definitions, many search techniques (e.g. PSO, DE, ES, EDA, Simulated Annealing, etc) allow concurrent exploration and exploitation. We will present experiments which show how concurrent exploration and exploitation weakens exploration and is a primary cause of (premature) convergence. We will then introduce “Thresheld Convergence” in which convergence is “held” back through the use of a threshold function [1]. This focus on attraction basins has also led to the development of “Leaders and Followers” [2] – a new metaheuristic which focuses on how solutions are compared as opposed to how solutions are created.
This is an intermediate tutorial which is best suited for researchers already familiar with Particle Swarm Optimization, Differential Evolution, and/or other metaheuristics, especially with their performance characteristics in unimodal and multi-modal search spaces.
Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users to determine the parameter settings that optimize the performance of the algorithm before the algorithm is actually deployed. Moreover, automatic offline algorithm design methods may potentially lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts.
This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. This part will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design. We will provide a practical step-by-step guide to using irace for a typical algorithm design scenario.
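The offline tuning problem described above can be caricatured with a random-search tuner in a few lines of Python. irace replaces this naive loop with iterated racing (discarding poor configurations early, on statistical evidence), but the inputs are similar in spirit: a target algorithm, a parameter space, training instances and a budget. All names and the discrete parameter space below are illustrative assumptions:

```python
import random

def tune(run_algorithm, param_space, instances, budget=100, rng=None):
    """Deliberately simple offline tuner: sample random configurations
    from `param_space` and keep the one with the best mean cost over the
    training instances. `run_algorithm(conf, instance)` returns a cost."""
    rng = rng or random.Random()
    best_conf, best_cost = None, float("inf")
    for _ in range(budget):
        conf = {name: rng.choice(values) for name, values in param_space.items()}
        cost = sum(run_algorithm(conf, inst) for inst in instances) / len(instances)
        if cost < best_cost:
            best_conf, best_cost = conf, cost
    return best_conf, best_cost
```

Even this crude loop makes the key point of the tutorial: once design choices are exposed as parameters, the design problem itself becomes an optimization problem that a machine can search far more systematically than trial and error.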
There are three airports close to the city: Donostia - San Sebastián, Bilbao and Biarritz (France).
The first one is the closest, but it only operates small flights, mainly from Madrid and Barcelona. The other two airports have international connections with different countries.
Bilbao. This is the main hub for flights coming to the Basque Country, with daily flights from the main European capitals (London, Paris and Frankfurt, to name a few). It is located 100km away from DSS, and there are regular buses that connect the airport with the city. The trip takes 70 minutes and the ticket costs about 17€.
Donostia - San Sebastián. The airport is 20km away from the city, near the village of Hondarribia. Two companies operate at this airport, Iberia and Vueling. Travelling from the airport to the city can be done by taxi or by bus. The taxi ride takes about 20 minutes and costs around 45€. The buses are scheduled hourly and the trip takes 30 minutes; the ticket costs 12€.
Biarritz. This airport is in France, 50km away from DSS. Although there are buses that connect this airport with DSS, they are not easily accessible. For that reason, we advise taking a taxi (the ride takes some 30 minutes and costs around 105€).
The RENFE station is located 600m away from the congress venue. Although it is a small station, there are daily trains to different cities in Spain, France and Portugal. There is a second station in the city, Euskotren, but it serves only local trains.
For those travellers coming from Paris, there is a cheap alternative: the TGV.
RENFE is the Spanish train company. The services from and to DSS include both national and international destinations, such as overnight services to Lisbon and Paris. There are also regular services to Madrid and Barcelona, but they typically take quite a long time and people tend to avoid them.
Euskotren is the Basque train company and it only operates services inside the Basque Country. In general, these services are a good option for moving around DSS (including Hendaye, on the French border), but they are not advisable for travelling from/to Bilbao (the trip takes too much time).
The TGV is the high-speed train service of the SNCF, the French train company. One of the stations of this network is in Hendaye, a village 30km away from DSS. A typical second-class ticket from Paris to Hendaye costs around 50€ and the trip takes less than 6 hours. Once in Hendaye, travellers can transfer to Euskotren, arriving at DSS in less than 40 minutes (the Euskotren station is 1.5km away from the congress venue).
There is one bus station in Donostia - San Sebastián, close to the RENFE train station.
There are bus connections with many cities both in Spain and in Europe. For a complete list you can check the options available in the station's schedule.
Bus is the best way to travel from other parts of the Basque Country (e.g., Bilbao) and, although it is not a popular way to cover long distances, it can be a cheap alternative for coming from Madrid or Barcelona. For further information on these routes, check the ALSA company's website.
The early registration period has finished and the registration page will be temporarily unavailable while we update the fees. In a few days we will open it again.
Sorry for the inconvenience.
All the pictures are available on Flickr in two different albums. From a computer, you can download all the pictures of an album together using the album's "Download" link, or each picture individually. From other devices (such as mobile phones), the download option is available via the "Share" link.