Ideal evaluation from coevolution. 

Abstract  In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. 
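The abstract frames each test as an objective in the sense of Evolutionary Multi-Objective Optimization, so that learners can be compared by Pareto dominance over their outcomes against the tests. A minimal sketch of that view, assuming binary pass/fail outcomes (the learner names and outcome matrix below are hypothetical, and this is not the paper's DELPHI algorithm):

```python
# Illustrative sketch: comparing learners by Pareto dominance over
# their outcomes against a set of tests (each test = one objective).

def dominates(a, b):
    """True if outcome vector a Pareto-dominates b: at least as good
    on every test, and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(outcomes):
    """Return the learners whose outcome vectors no other learner dominates."""
    return [la for la, oa in outcomes.items()
            if not any(dominates(ob, oa) for lb, ob in outcomes.items() if lb != la)]

# Hypothetical outcomes: 1 = the learner passes the test, 0 = it fails.
outcomes = {
    "L1": (1, 1, 0),   # strong on tests 1 and 2
    "L2": (0, 1, 1),   # strong on tests 2 and 3
    "L3": (0, 1, 0),   # dominated by both L1 and L2
}
print(non_dominated(outcomes))  # → ['L1', 'L2']
```

Under this view, a good evaluation set of tests is one that makes exactly these dominance distinctions among the current learners, which is what the paper's Complete Evaluation Set is defined to guarantee.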
PMID  15157373 
Related Publications 
Population-based continuous optimization, probabilistic modelling and mean shift. A monotonic archive for Pareto-coevolution. 
Authors  de Jong, Edwin D.; Pollack, Jordan B. 
Major MeshTerms  Algorithms; Biological Evolution; Computational Biology; Evaluation Studies as Topic; Models, Theoretical 
Keywords  Algorithms; Biological Evolution; Computational Biology; Evaluation Studies as Topic; Models, Theoretical 
Journal Title  Evolutionary computation 
Publication Year Start  2004-01-01 
%A de Jong, Edwin D.; Pollack, Jordan B. %T Ideal evaluation from coevolution. %J Evolutionary computation, vol. 12, no. 2, pp. 159-192 %D 00/2004 %V 12 %N 2 %M eng %B In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. %K Algorithms, Biological Evolution, Computational Biology, Evaluation Studies as Topic, Models, Theoretical %P 159 %L 192 %Y 10.1162/106365604773955139 %W PHY %G AUTHOR %R 2004.......12..159D
@Article{deJong2004, author="de Jong, Edwin D. and Pollack, Jordan B.", title="Ideal evaluation from coevolution.", journal="Evolutionary computation", year="2004", volume="12", number="2", pages="159--192", keywords="Algorithms", keywords="Biological Evolution", keywords="Computational Biology", keywords="Evaluation Studies as Topic", keywords="Models, Theoretical", abstract="In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown.", issn="1063-6560", doi="10.1162/106365604773955139", url="http://www.ncbi.nlm.nih.gov/pubmed/15157373", language="eng" }
%0 Journal Article %T Ideal evaluation from coevolution. %A de Jong, Edwin D. %A Pollack, Jordan B. %J Evolutionary computation %D 2004 %V 12 %N 2 %@ 1063-6560 %G eng %F deJong2004 %X In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. %K Algorithms %K Biological Evolution %K Computational Biology %K Evaluation Studies as Topic %K Models, Theoretical %U http://dx.doi.org/10.1162/106365604773955139 %U http://www.ncbi.nlm.nih.gov/pubmed/15157373 %P 159-192
PT Journal AU de Jong, ED Pollack, JB TI Ideal evaluation from coevolution. SO Evolutionary computation JI Evol Comput PY 2004 BP 159 EP 192 VL 12 IS 2 DI 10.1162/106365604773955139 LA eng DE Algorithms; Biological Evolution; Computational Biology; Evaluation Studies as Topic; Models, Theoretical AB In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. ER
PMID 15157373 OWN  NLM STAT MEDLINE DA  20040525 DCOM 20040720 LR  20101118 IS  1063-6560 (Print) IS  1063-6560 (Linking) VI  12 IP  2 DP  2004 Summer TI  Ideal evaluation from coevolution. PG  159-92 AB  In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. CI  Copyright 2004 Massachusetts Institute of Technology FAU  de Jong, Edwin D AU  de Jong ED AD  DEMO Lab, Volen National Center for Complex Systems, Brandeis University MS018, 415 South Street, Waltham MA 02454-9110, USA. [email protected] FAU  Pollack, Jordan B AU  Pollack JB LA  eng PT  Journal Article PT  Research Support, Non-U.S. 
Gov't PL  United States TA  Evol Comput JT  Evolutionary computation JID  9513581 SB  IM MH  *Algorithms MH  *Biological Evolution MH  *Computational Biology MH  *Evaluation Studies as Topic MH  *Models, Theoretical EDAT 2004/05/26 05:00 MHDA 2004/07/21 05:00 CRDT 2004/05/26 05:00 AID  10.1162/106365604773955139 [doi] PST  ppublish SO  Evol Comput. 2004 Summer;12(2):159-92.
TY  JOUR AU  de Jong, Edwin D. AU  Pollack, Jordan B. PY  2004// TI  Ideal evaluation from coevolution. T2  Evol Comput JO  Evolutionary computation SP  159 EP  192 VL  12 IS  2 KW  Algorithms KW  Biological Evolution KW  Computational Biology KW  Evaluation Studies as Topic KW  Models, Theoretical N2  In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown. SN  1063-6560 UR  http://dx.doi.org/10.1162/106365604773955139 UR  http://www.ncbi.nlm.nih.gov/pubmed/15157373 ID  deJong2004 ER 
<?xml version="1.0" encoding="UTF-8"?> <b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" > <b:Source> <b:Tag>deJong2004</b:Tag> <b:SourceType>ArticleInAPeriodical</b:SourceType> <b:Year>2004</b:Year> <b:PeriodicalName>Evolutionary computation</b:PeriodicalName> <b:Volume>12</b:Volume> <b:Issue>2</b:Issue> <b:Pages>159-192</b:Pages> <b:Author> <b:Author><b:NameList> <b:Person><b:Last>de Jong</b:Last><b:First>Edwin</b:First><b:Middle>D</b:Middle></b:Person> <b:Person><b:Last>Pollack</b:Last><b:First>Jordan</b:First><b:Middle>B</b:Middle></b:Person> </b:NameList></b:Author> </b:Author> <b:Title>Ideal evaluation from coevolution.</b:Title> <b:ShortTitle>Evol Comput</b:ShortTitle> <b:Comments>In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. 
Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown.</b:Comments> </b:Source> </b:Sources>