<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="article">
<front>
    <journal-meta>
        <journal-id journal-id-type="publisher-id">INFEDU</journal-id>
        <journal-title-group>
            <journal-title>Informatics in Education</journal-title>
        </journal-title-group>
        <issn pub-type="epub">1648-5831</issn>
        <issn pub-type="ppub">1648-5831</issn>
        <publisher>
            <publisher-name>VU</publisher-name>
        </publisher>
    </journal-meta>
    <article-meta>
        <article-id pub-id-type="publisher-id">INFE076</article-id>
        <article-id pub-id-type="doi">10.15388/infedu.2006.05</article-id>
        <article-categories>
            <subj-group subj-group-type="heading">
                <subject>Article</subject>
            </subj-group>
        </article-categories>
        <title-group>
            <article-title>On the Suitability of Programming Tasks for Automated Evaluation</article-title>
        </title-group>
        <contrib-group>
            <contrib contrib-type="author">
                <name>
                    <surname>FORISEK</surname>
                    <given-names>Michal</given-names>
                </name>
                <email xlink:href="mailto:forisek@dcs.fmph.uniba.sk">forisek@dcs.fmph.uniba.sk</email>
                <xref ref-type="aff" rid="j_INFEDU_aff_000"/>
            </contrib>
            <aff id="j_INFEDU_aff_000">Department of Informatics, Faculty of Mathematics, Physics and Informatics, Comenius University, Mlynska dolina, 842 48 Bratislava, Slovakia</aff>
        </contrib-group>
        <volume>5</volume>
        <issue>1</issue>
        <fpage>63</fpage>
        <lpage>76</lpage>
        <pub-date pub-type="epub">
            <day>15</day>
            <month>04</month>
            <year>2006</year>
        </pub-date>
        <abstract>
            <p>For many programming tasks we would like to have some kind of automatic evaluation process. For example, most programming contests evaluate contestants&#039; submissions automatically. While this approach is clearly highly efficient, it also has drawbacks: often the test inputs are unable to &#8220;break&#8221; all flawed submissions. In this article we show that the situation is worse than it may seem &#8211; for some programming tasks it is impossible to design good test inputs at all. Moreover, we discuss ways to recognize such tasks, and discuss other possibilities for carrying out the evaluation. The discussion is focused on programming contests, but the results can be applied to any programming tasks, e.g., assignments in school.</p>
        </abstract>
        <kwd-group>
            <label>Keywords</label>
            <kwd>programming contests</kwd>
            <kwd>programming tasks</kwd>
            <kwd>automated testing</kwd>
            <kwd>IOI</kwd>
            <kwd>black-box testing</kwd>
            <kwd>task analysis</kwd>
        </kwd-group>
    </article-meta>
</front>
</article>
