List: solr-dev
Subject: [JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 462 - Still Unstable
From: Apache Jenkins Server <jenkins@builds.apache.org>
Date: 2019-08-31 20:14:27
Message-ID: 823337697.2042.1567282675468.JavaMail.jenkins@jenkins02
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/462/
1 tests failed.
FAILED: org.apache.solr.cloud.TestConfigSetsAPI.testUserAndTestDefaultConfigsetsAreSame
Error Message:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema \
contents doesn't match expected \
(/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) \
expected:<... <tokenizer [name="whitespace"/> </analyzer> \
</fieldType> <!-- A general text field that has reasonable, generic \
cross-language defaults: it tokenizes with StandardTokenizer, removes stop \
words from case-insensitive "stopwords.txt" (empty by default), and down \
cases. At query time only, it also applies synonyms. --> <fieldType \
name="text_general" class="solr.TextField" positionIncrementGap="100" \
multiValued="true"> <analyzer type="index"> <tokenizer \
name="standard"/> <filter name="stop" ignoreCase="true" words="stopwords.txt" \
/> <!-- in this example, we will only use synonyms at query time \
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" \
expand="false"/> <filter name="flattenGraph"/> --> <filter \
name="lowercase"/> </analyzer> <analyzer type="query"> <tokenizer \
name="standard"/> <filter name="stop" ignoreCase="true" words="stopwords.txt" \
/> <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="lowercase"/> </analyzer> </fieldType> \
<!-- SortableTextField generaly functions exactly like TextField, except \
that it supports, and by default uses, docValues for sorting (or faceting) \
on the first 1024 characters of the original field values (which is configurable). \
This makes it a bit more useful then TextField in many situations, but the trade-off \
is that it takes up more space on disk; which is why it's not used in place of \
TextField for every fieldType in this _default schema. --> \
<dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" \
multiValued="false"/> <dynamicField name="*_txt_sort" type="text_gen_sort" \
indexed="true" stored="true"/> <fieldType name="text_gen_sort" \
class="solr.SortableTextField" positionIncrementGap="100" multiValued="true"> \
<analyzer type="index"> <tokenizer name="standard"/> <filter \
name="stop" ignoreCase="true" words="stopwords.txt" /> <filter \
name="lowercase"/> </analyzer> <analyzer type="query"> <tokenizer \
name="standard"/> <filter name="stop" ignoreCase="true" words="stopwords.txt" \
/> <filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="lowercase"/> </analyzer> </fieldType> \
<!-- A text field with defaults appropriate for English: it tokenizes with \
StandardTokenizer, removes English stop words (lang/stopwords_en.txt), down \
cases, protects words from protwords.txt, and finally applies Porter's \
stemming. The query time analyzer also applies synonyms from synonyms.txt. --> \
<dynamicField name="*_txt_en" type="text_en" indexed="true" stored="true"/> \
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100"> \
<analyzer type="index"> <tokenizer name="standard"/> <!-- in this \
example, we will only use synonyms at query time <filter name="synonymGraph" \
synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> <filter \
name="flattenGraph"/> --> <!-- Case insensitive stop word removal. \
--> <filter name="stop" ignoreCase="true" \
words="lang/stopwords_en.txt" /> <filter name="lowercase"/> \
<filter name="englishPossessive"/> <filter name="keywordMarker" \
protected="protwords.txt"/> <!-- Optionally you may want to use this less \
aggressive stemmer instead of PorterStemFilterFactory: <filter \
name="englishMinimalStem"/> --> <filter name="porterStem"/> \
</analyzer> <analyzer type="query"> <tokenizer name="standard"/> \
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> \
<filter name="stop" ignoreCase="true" \
words="lang/stopwords_en.txt" /> <filter name="lowercase"/> \
<filter name="englishPossessive"/> <filter name="keywordMarker" \
protected="protwords.txt"/> <!-- Optionally you may want to use this less \
aggressive stemmer instead of PorterStemFilterFactory: <filter \
name="englishMinimalStem"/> --> <filter name="porterStem"/> \
</analyzer> </fieldType> <!-- A text field with defaults appropriate for \
English, plus aggressive word-splitting and autophrase features enabled. \
This field is just like text_en, except it adds WordDelimiterGraphFilter to \
enable splitting and matching of words on case-change, alpha numeric \
boundaries, and non-alphanumeric chars. This means certain compound word \
cases will work, for example query "wi fi" will match document "WiFi" or \
"wi-fi". --> <dynamicField name="*_txt_en_split" type="text_en_splitting" \
indexed="true" stored="true"/> <fieldType name="text_en_splitting" \
class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true"> \
<analyzer type="index"> <tokenizer name="whitespace"/> <!-- in this \
example, we will only use synonyms at query time <filter name="synonymGraph" \
synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/> --> \
<!-- Case insensitive stop word removal. --> <filter name="stop" \
ignoreCase="true" words="lang/stopwords_en.txt" /> \
<filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" \
catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> \
<filter name="lowercase"/> <filter name="keywordMarker" \
protected="protwords.txt"/> <filter name="porterStem"/> <filter \
name="flattenGraph" /> </analyzer> <analyzer type="query"> \
<tokenizer name="whitespace"/> <filter name="synonymGraph" \
synonyms="synonyms.txt" ignoreCase="true" expand="true"/> <filter name="stop" \
ignoreCase="true" words="lang/stopwords_en.txt" /> \
<filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" \
catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/> \
<filter name="lowercase"/> <filter name="keywordMarker" \
protected="protwords.txt"/> <filter name="porterStem"/> </analyzer> \
</fieldType> <!-- Less flexible matching, but less false matches. Probably not \
ideal for product names, but may be good for SKUs. Can insert dashes in the \
wrong place and still match. --> <dynamicField name="*_txt_en_split_tight" \
type="text_en_splitting_tight" indexed="true" stored="true"/> <fieldType \
name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" \
autoGeneratePhraseQueries="true"> <analyzer type="index"> <tokenizer \
name="whitespace"/> <filter name="synonymGraph" synonyms="synonyms.txt" \
ignoreCase="true" expand="false"/> <filter name="stop" ignoreCase="true" \
words="lang/stopwords_en.txt"/> <filter name="wordDelimiterGraph" \
generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" \
catenateAll="0"/> <filter name="lowercase"/> <filter \
name="keywordMarker" protected="protwords.txt"/> <filter \
name="englishMinimalStem"/> <!-- this filter can remove any duplicate tokens \
that appear at the same position - sometimes possible with \
WordDelimiterGraphFilter in conjuncton with stemming. --> <filter \
name="removeDuplicates"/> <filter name="flattenGraph" /> </analyzer> \
<analyzer type="query"> <tokenizer name="whitespace"/> <filter \
name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false"/> \
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/> <filter \
name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" \
catenateWords="1" catenateNumbers="1" catenateAll="0"/> <filter \
name="lowercase"/> <filter name="keywordMarker" protected="protwords.txt"/> \
<filter name="englishMinimalStem"/> <!-- this filter can remove any duplicate \
tokens that appear at the same position - sometimes possible with \
WordDelimiterGraphFilter in conjuncton with stemming. --> <filter \
name="removeDuplicates"/> </analyzer> </fieldType> <!-- Just like \
text_general except it reverses the characters of each token, to enable more \
efficient leading wildcard queries. --> <dynamicField name="*_txt_rev" \
type="text_general_rev" indexed="true" stored="true"/> <fieldType \
name="text_general_rev" class="solr.TextField" positionIncrementGap="100"> \
<analyzer type="index"> <tokenizer name="standard"/> <filter \
name="stop" ignoreCase="true" words="stopwords.txt" /> <filter \
name="lowercase"/> <filter name="reversedWildcard" withOriginal="true" \
maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/> </analyzer> \
<analyzer type="query"> <tokenizer name="standard"/> <filter \
name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> \
<filter name="stop" ignoreCase="true" words="stopwords.txt" /> <filter \
name="lowercase"/> </analyzer> </fieldType> <dynamicField \
name="*_phon_en" type="phonetic_en" indexed="true" stored="true"/> <fieldType \
name="phonetic_en" stored="false" indexed="true" class="solr.TextField" > \
<analyzer> <tokenizer name="standard"/> <filter \
name="doubleMetaphone" inject="false"/> </analyzer> </fieldType> <!-- \
lowercases the entire field value, keeping it as a single token. --> \
<dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="true"/> \
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100"> \
<analyzer> <tokenizer name="keyword"/> <filter name="lowercase" /> \
</analyzer> </fieldType> <!-- Example of using \
PathHierarchyTokenizerFactory at index time, so queries for paths match \
documents at that path, or in descendent paths --> <dynamicField \
name="*_descendent_path" type="descendent_path" indexed="true" stored="true"/> \
<fieldType name="descendent_path" class="solr.TextField"> <analyzer \
type="index"> <tokenizer name="pathHierarchy" delimiter="/" /> \
</analyzer> <analyzer type="query"> <tokenizer name="keyword" /> \
</analyzer> </fieldType> <!-- Example of using \
PathHierarchyTokenizerFactory at query time, so queries for paths match \
documents at that path, or in ancestor paths --> <dynamicField \
name="*_ancestor_path" type="ancestor_path" indexed="true" stored="true"/> \
<fieldType name="ancestor_path" class="solr.TextField"> <analyzer type="index"> \
<tokenizer name="keyword" /> </analyzer> <analyzer type="query"> \
<tokenizer name="pathHierarchy" delimiter="/" /> </analyzer> </fieldType> \
<!-- This point type indexes the coordinates as separate fields (subFields) If \
subFieldType is defined, it references a type, and a dynamic field definition \
is created matching *___<typename>. Alternately, if subFieldSuffix is \
defined, that is used to create the subFields. Example: if \
subFieldType="double", then the coordinates would be indexed in fields \
myloc_0___double,myloc_1___double. Example: if subFieldSuffix="_d" then the \
coordinates would be indexed in fields myloc_0_d,myloc_1_d The \
subFields are an implementation detail of the fieldType, and end users normally \
should not need to know about them. --> <dynamicField name="*_point" \
type="point" indexed="true" stored="true"/> <fieldType name="point" \
class="solr.PointType" dimension="2" subFieldSuffix="_d"/> <!-- A specialized \
field for geospatial search filters and distance sorting. --> <fieldType \
name="location" class="solr.LatLonPointSpatialField" docValues="true"/> <!-- A \
geospatial field type that supports multiValued and polygon shapes. For more \
information about this and other spatial fields see: \
http://lucene.apache.org/solr/guide/spatial-search.html --> <fieldType \
name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType" \
geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" /> \
<!-- Payloaded field types --> <fieldType name="delimited_payloads_float" \
stored="false" indexed="true" class="solr.TextField"> <analyzer> \
<tokenizer name="whitespace"/> <filter name="delimitedPayload" \
encoder="float"/> </analyzer> </fieldType> <fieldType \
name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField"> \
<analyzer> <tokenizer name="whitespace"/> <filter \
name="delimitedPayload" encoder="integer"/> </analyzer> </fieldType> \
<fieldType name="delimited_payloads_string" stored="false" indexed="true" \
class="solr.TextField"> <analyzer> <tokenizer name="whitespace"/> \
<filter name="delimitedPayload" encoder="identity"/> </analyzer> \
</fieldType> <!-- some examples for different languages (generally ordered by \
ISO code) --> <!-- Arabic --> <dynamicField name="*_txt_ar" type="text_ar" \
indexed="true" stored="true"/> <fieldType name="text_ar" class="solr.TextField" \
positionIncrementGap="100"> <analyzer> <tokenizer name="standard"/> \
<!-- for any non-arabic --> <filter name="lowercase"/> <filter \
name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" /> <!-- \
normalizes ﻯ to ﻱ, etc --> <filter name="arabicNormalization"/> \
<filter name="arabicStem"/> </analyzer> </fieldType> <!-- Bulgarian \
--> <dynamicField name="*_txt_bg" type="text_bg" indexed="true" stored="true"/> \
<fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100"> \
<analyzer> <tokenizer name="standard"/> <filter name="lowercase"/> \
<filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" /> \
<filter name="bulgarianStem"/> </analyzer> </fieldType> <!-- \
Catalan --> <dynamicField name="*_txt_ca" type="text_ca" indexed="true" \
stored="true"/> <fieldType name="text_ca" class="solr.TextField" \
positionIncrementGap="100"> <analyzer> <tokenizer name="standard"/> \
<!-- removes l', etc --> <filter name="elision" ignoreCase="true" \
articles="lang/contractions_ca.txt"/> <filter name="lowercase"/> \
<filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" /> \
<filter name="snowballPorter" language="Catalan"/> </analyzer> </fieldType> \
<!-- CJK bigram (see text_ja for a Japanese configuration using morphological \
analysis) --> <dynamicField name="*_txt_cjk" type="text_cjk" indexed="true" \
stored="true"/> <fieldType name="text_cjk" class="solr.TextField" \
positionIncrementGap="100"> <analyzer> <tokenizer name="standard"/> \
<!-- normalize width before bigram, as e.g. half-width dakuten combine --> \
<filter name="CJKWidth"/> <!-- for any non-CJK --> <filter \
name="lowercase"/> <filter name="CJKBigram"/> </analyzer> \
</fieldType> <!-- Czech --> <dynamicField name="*_txt_cz" type="text_cz" \
indexed="true" stored="true"/> <fieldType name="text_cz" class="solr.TextField" \
positionIncrementGap="100"> <analyzer> <tokenizer name="standard"/> \
Stack Trace:
org.junit.ComparisonFailure: \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/core/src/test-files/solr/configsets/_default/conf/managed-schema \
contents doesn't match expected \
(/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/server/solr/configsets/_default/conf/managed-schema) \
expected:<... <tokenizer [name="whitespace"/>
</analyzer>
</fieldType>
<!-- A general text field that has reasonable, generic
cross-language defaults: it tokenizes with StandardTokenizer,
removes stop words from case-insensitive "stopwords.txt"
(empty by default), and down cases. At query time only, it
also applies synonyms.
-->
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" \
multiValued="true"> <analyzer type="index">
<tokenizer name="standard"/>
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" \
expand="false"/> <filter name="flattenGraph"/>
-->
<filter name="lowercase"/>
</analyzer>
<analyzer type="query">
<tokenizer name="standard"/>
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="lowercase"/>
</analyzer>
</fieldType>
<!-- SortableTextField generaly functions exactly like TextField,
except that it supports, and by default uses, docValues for sorting (or \
faceting)
on the first 1024 characters of the original field values (which is \
configurable).
This makes it a bit more useful then TextField in many situations, but the \
trade-off
is that it takes up more space on disk; which is why it's not used in place \
of TextField for every fieldType in this _default schema.
-->
<dynamicField name="*_t_sort" type="text_gen_sort" indexed="true" stored="true" \
multiValued="false"/> <dynamicField name="*_txt_sort" type="text_gen_sort" \
indexed="true" stored="true"/> <fieldType name="text_gen_sort" \
class="solr.SortableTextField" positionIncrementGap="100" multiValued="true"> \
<analyzer type="index"> <tokenizer name="standard"/>
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="lowercase"/>
</analyzer>
<analyzer type="query">
<tokenizer name="standard"/>
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="lowercase"/>
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English: it tokenizes with \
StandardTokenizer,
removes English stop words (lang/stopwords_en.txt), down cases, protects \
words from protwords.txt, and
finally applies Porter's stemming. The query time analyzer also applies \
synonyms from synonyms.txt. --> <dynamicField name="*_txt_en" type="text_en" \
indexed="true" stored="true"/> <fieldType name="text_en" class="solr.TextField" \
positionIncrementGap="100"> <analyzer type="index">
<tokenizer name="standard"/>
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" \
expand="false"/> <filter name="flattenGraph"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter name="stop"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter name="lowercase"/>
<filter name="englishPossessive"/>
<filter name="keywordMarker" protected="protwords.txt"/>
<!-- Optionally you may want to use this less aggressive stemmer instead of \
PorterStemFilterFactory: <filter name="englishMinimalStem"/>
-->
<filter name="porterStem"/>
</analyzer>
<analyzer type="query">
<tokenizer name="standard"/>
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="stop"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter name="lowercase"/>
<filter name="englishPossessive"/>
<filter name="keywordMarker" protected="protwords.txt"/>
<!-- Optionally you may want to use this less aggressive stemmer instead of \
PorterStemFilterFactory: <filter name="englishMinimalStem"/>
-->
<filter name="porterStem"/>
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English, plus
aggressive word-splitting and autophrase features enabled.
This field is just like text_en, except it adds
WordDelimiterGraphFilter to enable splitting and matching of
words on case-change, alpha numeric boundaries, and
non-alphanumeric chars. This means certain compound word
cases will work, for example query "wi fi" will match
document "WiFi" or "wi-fi".
-->
<dynamicField name="*_txt_en_split" type="text_en_splitting" indexed="true" \
stored="true"/> <fieldType name="text_en_splitting" class="solr.TextField" \
positionIncrementGap="100" autoGeneratePhraseQueries="true"> <analyzer type="index">
<tokenizer name="whitespace"/>
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" \
expand="false"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter name="stop"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter name="wordDelimiterGraph" generateWordParts="1" \
generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" \
splitOnCaseChange="1"/> <filter name="lowercase"/>
<filter name="keywordMarker" protected="protwords.txt"/>
<filter name="porterStem"/>
<filter name="flattenGraph" />
</analyzer>
<analyzer type="query">
<tokenizer name="whitespace"/>
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="stop"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter name="wordDelimiterGraph" generateWordParts="1" \
generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" \
splitOnCaseChange="1"/> <filter name="lowercase"/>
<filter name="keywordMarker" protected="protwords.txt"/>
<filter name="porterStem"/>
</analyzer>
</fieldType>
<!-- Less flexible matching, but less false matches. Probably not ideal for \
product names,
but may be good for SKUs. Can insert dashes in the wrong place and still \
match. --> <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight" \
indexed="true" stored="true"/> <fieldType name="text_en_splitting_tight" \
class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true"> \
<analyzer type="index"> <tokenizer name="whitespace"/>
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="false"/>
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
<filter name="wordDelimiterGraph" generateWordParts="0" \
generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/> \
<filter name="lowercase"/> <filter name="keywordMarker" protected="protwords.txt"/>
<filter name="englishMinimalStem"/>
<!-- this filter can remove any duplicate tokens that appear at the same \
position - sometimes
possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
<filter name="removeDuplicates"/>
<filter name="flattenGraph" />
</analyzer>
<analyzer type="query">
<tokenizer name="whitespace"/>
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="false"/>
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt"/>
<filter name="wordDelimiterGraph" generateWordParts="0" \
generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/> \
<filter name="lowercase"/> <filter name="keywordMarker" protected="protwords.txt"/>
<filter name="englishMinimalStem"/>
<!-- this filter can remove any duplicate tokens that appear at the same \
position - sometimes
possible with WordDelimiterGraphFilter in conjuncton with stemming. -->
<filter name="removeDuplicates"/>
</analyzer>
</fieldType>
<!-- Just like text_general except it reverses the characters of
each token, to enable more efficient leading wildcard queries.
-->
<dynamicField name="*_txt_rev" type="text_general_rev" indexed="true" \
stored="true"/> <fieldType name="text_general_rev" class="solr.TextField" \
positionIncrementGap="100"> <analyzer type="index">
<tokenizer name="standard"/>
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="lowercase"/>
<filter name="reversedWildcard" withOriginal="true"
maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
</analyzer>
<analyzer type="query">
<tokenizer name="standard"/>
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" \
expand="true"/> <filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="lowercase"/>
</analyzer>
</fieldType>
<dynamicField name="*_phon_en" type="phonetic_en" indexed="true" \
stored="true"/> <fieldType name="phonetic_en" stored="false" indexed="true" \
class="solr.TextField" > <analyzer>
<tokenizer name="standard"/>
<filter name="doubleMetaphone" inject="false"/>
</analyzer>
</fieldType>
<!-- lowercases the entire field value, keeping it as a single token. -->
<dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="true"/>
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer name="keyword"/>
<filter name="lowercase" />
</analyzer>
</fieldType>
<!--
Example of using PathHierarchyTokenizerFactory at index time, so
queries for paths match documents at that path, or in descendent paths
-->
<dynamicField name="*_descendent_path" type="descendent_path" indexed="true" \
stored="true"/> <fieldType name="descendent_path" class="solr.TextField">
<analyzer type="index">
<tokenizer name="pathHierarchy" delimiter="/" />
</analyzer>
<analyzer type="query">
<tokenizer name="keyword" />
</analyzer>
</fieldType>
<!--
Example of using PathHierarchyTokenizerFactory at query time, so
queries for paths match documents at that path, or in ancestor paths
-->
<dynamicField name="*_ancestor_path" type="ancestor_path" indexed="true" \
stored="true"/> <fieldType name="ancestor_path" class="solr.TextField">
<analyzer type="index">
<tokenizer name="keyword" />
</analyzer>
<analyzer type="query">
<tokenizer name="pathHierarchy" delimiter="/" />
</analyzer>
</fieldType>
<!-- This point type indexes the coordinates as separate fields (subFields)
If subFieldType is defined, it references a type, and a dynamic field
definition is created matching *___<typename>. Alternately, if
subFieldSuffix is defined, that is used to create the subFields.
Example: if subFieldType="double", then the coordinates would be
indexed in fields myloc_0___double,myloc_1___double.
Example: if subFieldSuffix="_d" then the coordinates would be indexed
in fields myloc_0_d,myloc_1_d
The subFields are an implementation detail of the fieldType, and end
users normally should not need to know about them.
-->
<dynamicField name="*_point" type="point" indexed="true" stored="true"/>
<fieldType name="point" class="solr.PointType" dimension="2" \
subFieldSuffix="_d"/>
<!-- A specialized field for geospatial search filters and distance sorting. -->
<fieldType name="location" class="solr.LatLonPointSpatialField" \
docValues="true"/>
<!-- A geospatial field type that supports multiValued and polygon shapes.
For more information about this and other spatial fields see:
http://lucene.apache.org/solr/guide/spatial-search.html
-->
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
geo="true" distErrPct="0.025" maxDistErr="0.001" \
distanceUnits="kilometers" />
<!-- Payloaded field types -->
<fieldType name="delimited_payloads_float" stored="false" indexed="true" \
class="solr.TextField"> <analyzer>
<tokenizer name="whitespace"/>
<filter name="delimitedPayload" encoder="float"/>
</analyzer>
</fieldType>
<fieldType name="delimited_payloads_int" stored="false" indexed="true" \
class="solr.TextField"> <analyzer>
<tokenizer name="whitespace"/>
<filter name="delimitedPayload" encoder="integer"/>
</analyzer>
</fieldType>
<fieldType name="delimited_payloads_string" stored="false" indexed="true" \
class="solr.TextField"> <analyzer>
<tokenizer name="whitespace"/>
<filter name="delimitedPayload" encoder="identity"/>
</analyzer>
</fieldType>
<!-- some examples for different languages (generally ordered by ISO code) -->
<!-- Arabic -->
<dynamicField name="*_txt_ar" type="text_ar" indexed="true" stored="true"/>
<fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer name="standard"/>
<!-- for any non-arabic -->
<filter name="lowercase"/>
<filter name="stop" ignoreCase="true" words="lang/stopwords_ar.txt" />
<!-- normalizes ﻯ to ﻱ, etc -->
<filter name="arabicNormalization"/>
<filter name="arabicStem"/>
</analyzer>
</fieldType>
<!-- Bulgarian -->
<dynamicField name="*_txt_bg" type="text_bg" indexed="true" stored="true"/>
<fieldType name="text_bg" class="solr.TextField" positionIncrement
[...truncated too long message...]
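For anyone triaging this locally: the failing assertion boils down to a byte-for-byte comparison of the two `_default/conf/managed-schema` copies named at the top of the report (the one under `solr/core/src/test-files` versus the one under `solr/server/solr/configsets`). The sketch below is not part of the build output and not the actual JUnit code; it is a rough shell analogue of that check, using placeholder files and paths so it runs anywhere:

```shell
#!/bin/sh
# Illustrative only: stand-in files mimicking the two configset copies the
# test compares. In a real checkout you would point cmp at the two
# managed-schema paths reported in the failure message instead.
set -eu
tmp="$(mktemp -d)"
mkdir -p "$tmp/test-files/_default/conf" "$tmp/server/_default/conf"
printf '<schema name="default-config"/>\n' > "$tmp/test-files/_default/conf/managed-schema"
printf '<schema name="default-config"/>\n' > "$tmp/server/_default/conf/managed-schema"

# cmp -s is silent and exits non-zero on any byte difference, which is the
# condition under which testUserAndTestDefaultConfigsetsAreSame fails.
status="differ"
if cmp -s "$tmp/test-files/_default/conf/managed-schema" \
          "$tmp/server/_default/conf/managed-schema"; then
  status="match"
fi
echo "configsets $status"
rm -rf "$tmp"
```

In this build the two copies had drifted apart, so the real comparison took the "differ" branch and the test reported the schema diff quoted above.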
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = \
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/top-level-ivy-settings.xml
resolve:
jar-checksums:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918
[copy] Copying 249 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918
[delete] Deleting directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/null1690707918
check-working-copy:
[ivy:cachepath] :: resolving dependencies :: #;working@lucene2-us-west.apache.org
[ivy:cachepath] confs: [default]
[ivy:cachepath] found org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
[ivy:cachepath] found com.jcraft#jsch;0.1.54 in public
[ivy:cachepath] found com.jcraft#jzlib;1.1.1 in public
[ivy:cachepath] found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath] found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] found org.bouncycastle#bcpg-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcprov-jdk15on;1.60 in public
[ivy:cachepath] found org.bouncycastle#bcpkix-jdk15on;1.60 in public
[ivy:cachepath] found org.slf4j#slf4j-nop;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 77ms :: artifacts dl 13ms
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 9 | 0 | 0 | 0 || 9 | 0 |
---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] Checking working copy status...
-jenkins-base:
BUILD SUCCESSFUL
Total time: 186 minutes 35 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene2
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1835)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no matches found within 10000
at hudson.FilePath.act(FilePath.java:1074)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1835)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org