Elastic search case insensitive

Elastic search case insensitive

I have the following annotation-based Elasticsearch configuration. I've set the fields not to be analyzed because I don't want them to be tokenized:
@Document(indexName = "abc", type = "efg")
public class ResourceElasticSearch {

    @Id
    private String id;

    @Field(type = FieldType.String, index = FieldIndex.not_analyzed)
    private String name;

    @Field(type = FieldType.String, store = true)
    private List<String> tags = new ArrayList<>();

    @Field(type = FieldType.String)
    private String clientId;

    @Field(type = FieldType.String, index = FieldIndex.not_analyzed)
    private String virtualPath;

    @Field(type = FieldType.Date)
    private Date lastModifiedTime;

    @Field(type = FieldType.Date)
    private Date lastQueryTime;

    @Field(type = FieldType.String)
    private String modificationId;

    @Field(type = FieldType.String)
    private String realPath;

    @Field(type = FieldType.String)
    private String extension;

    @Field(type = FieldType.String)
    private ResourceType type;
}

Is it possible by using annotations to make the searches on the name, virtualPath and tags to be case-insensitive?
The search looks like this, search by wildcard is required:
private QueryBuilder getQueryBuilderForSearch(SearchCriteria criteria) {
    String virtualPath = criteria.getPath();

    return boolQuery()
            .must(wildcardQuery("virtualPath", virtualPath))
            .must(wildcardQuery("name", criteria.getName()));
}

Solutions/Answers:

Solution 1:

What you want is not really possible, and it's not about the Spring Data configuration, it's about Elasticsearch itself: you indexed the data as not_analyzed and it will stay that way unless you change the mapping and reindex.

Also, if you want case-insensitive data, I suggest indexing with a keyword analyzer combined with a lowercase token filter.
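To connect this with the annotations from the question: once such an analyzer is registered in the index settings, it still has to be referenced from the field mappings. A rough sketch of what that could look like on the entity (this is an assumption, not part of the original answer: the analyzer name keyword_lowercase is a placeholder for whatever name you register, and depending on the Spring Data Elasticsearch version the attribute is called analyzer or indexAnalyzer):

    // Sketch only: assumes an analyzer named "keyword_lowercase"
    // (keyword tokenizer + lowercase filter) is defined in the index settings,
    // e.g. the way Solution 2 below registers one programmatically.
    @Field(type = FieldType.String, analyzer = "keyword_lowercase")
    private String name;

    @Field(type = FieldType.String, analyzer = "keyword_lowercase")
    private String virtualPath;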

Solution 2:

I’ve found something based on Andrei Stefan’s suggestion which has a similar result to using the annotations:

    @Bean
    public Client client() throws IOException {
        TransportClient client = new TransportClient();
        TransportAddress address = new InetSocketTransportAddress(
                env.getProperty("elasticsearch.host"),
                Integer.parseInt(env.getProperty("elasticsearch.port")));
        client.addTransportAddress(address);

        // Register an analyzer named "keyword" (keyword tokenizer + lowercase filter)
        // in the index settings, so values are kept whole but lowercased.
        XContentBuilder settingsBuilder = XContentFactory.jsonBuilder()
                .startObject()
                    .startObject("analysis")
                        .startObject("analyzer")
                            .startObject("keyword")
                                .field("tokenizer", "keyword")
                                .array("filter", "lowercase")
                            .endObject()
                        .endObject()
                    .endObject()
                .endObject();

        // Create the index with these settings only if it does not exist yet.
        if (!client.admin().indices().prepareExists("abc").execute().actionGet().isExists()) {
            client.admin().indices().prepareCreate("abc").setSettings(settingsBuilder).get();
        }
        return client;
    }
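One caveat worth adding here (an assumption about this setup, not something from the original answers): wildcard queries are not analyzed, so even when the indexed values are lowercased by such an analyzer, the search terms still have to be lowercased on the client side. Applied to the query builder from the question:

    private QueryBuilder getQueryBuilderForSearch(SearchCriteria criteria) {
        // Wildcard query terms bypass analysis, so lowercase them to match
        // values that were lowercased at index time.
        String virtualPath = criteria.getPath().toLowerCase();
        String name = criteria.getName().toLowerCase();

        return boolQuery()
                .must(wildcardQuery("virtualPath", virtualPath))
                .must(wildcardQuery("name", name));
    }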

References

Stream data to amazon elasticsearch using logstash?

Stream data to amazon elasticsearch using logstash?

So I spun up a 2-instance Amazon Elasticsearch cluster.
I have installed the logstash-output-amazon_es plugin. This is my Logstash configuration file:
input {
  file {
    path => "/Users/user/Desktop/user/logs/*"
  }
}

filter {
  grok {
    match => {
      "message" => '%{COMMONAPACHELOG} %{QS}%{QS}'
    }
  }

  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    locale => en
  }

  useragent {
    source => "agent"
    target => "useragent"
  }
}

output {
  amazon_es {
    hosts => ["foo.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    index => "apache_elk_example"
    template => "./apache_template.json"
    template_name => "apache_elk_example"
    template_overwrite => true
  }
}

Now I am running this from my terminal:
/usr/local/opt/logstash/bin/logstash -f apache_logstash.conf

I get the error:
Failed to install template: undefined method `credentials' for nil:NilClass {:level=>:error}

I think I have got something completely wrong. Basically I just want to feed some dummy log input to my Amazon Elasticsearch cluster through Logstash. How should I proceed?
Edit: Storage type is Instance and the access policy is set to accessible to all.
Edit:
output {
  elasticsearch {
    hosts => ["foo.us-east-1.es.amazonaws.com"]
    ssl => true
    index => "apache_elk_example"
    template => "./apache_template.json"
    template_name => "apache_elk_example"
    template_overwrite => true
  }
}

Solutions/Answers:

Solution 1:

You need to provide the following two parameters:

  • aws_access_key_id and
  • aws_secret_access_key

Even though they are described as optional parameters, there is a comment in the code that makes it clear they are required:

aws_access_key_id and aws_secret_access_key are currently needed for this plugin to work right. Subsequent versions will have the credential resolution logic as follows:

Solution 2:

I also faced the same problem, and I solved it by adding the port after the hostname.
This happens because hosts => ["foo.us-east-1.es.amazonaws.com"] points to foo.us-east-1.es.amazonaws.com:9200, but AWS Elasticsearch does not listen on that port. Changing the hostname to foo.us-east-1.es.amazonaws.com:80 solves the problem.

Solution 3:

I was able to run Logstash together with AWS Elasticsearch without the access keys; I configured the access policy in the ES service instead.

It worked without the keys when starting Logstash manually; if you start Logstash as a service, the plugin doesn't work.

https://github.com/awslabs/logstash-output-amazon_es/issues/34

References

Elasticsearch terms aggregation on a not analyzed field with filters

Elasticsearch terms aggregation on a not analyzed field with filters

I have a not_analyzed field in my index:
"city": { "type": "string", "index": "not_analyzed" }

I have an aggregation like the following:
"aggs": {
  "city": {
    "terms": {
      "field": "city"
    }
  }
}

that gives me an output like this:
"aggregations": {
  "city": {
    "doc_count_error_upper_bound": 51,
    "sum_other_doc_count": 12478,
    "buckets": [
      {
        "key": "New York",
        "doc_count": 28420
      },
      {
        "key": "London",
        "doc_count": 23456
      },
      {
        "key": "São Paulo",
        "doc_count": 12727
      }
    ]
  }
}

I need to add a match_phrase_prefix query before the aggregation to filter my results based on user-supplied text, like this:
{
  "size": 0,
  "query": {
    "match_phrase_prefix": {
      "city": "sao"
    }
  },
  "aggs": {
    "city": {
      "terms": {
        "field": "city"
      }
    }
  }
}

and the result is… nothing!
"aggregations": {
  "city": {
    "doc_count_error_upper_bound": 0,
    "sum_other_doc_count": 0,
    "buckets": []
  }
}

I was expecting an aggregation result for the city São Paulo. Obviously the problem is that my field would need lowercase and asciifolding filters for the match (São/sao) to work, but I can't make the field analyzed because I don't want aggregation results like São, Paulo, New, York (which is what happens with analyzed fields).
What can I do? I have tried a lot of combinations of mapping/query/aggs but I can't get it to work.
Any help will be appreciated.

Solutions/Answers:

Solution 1:

Since the field is not_analyzed, the query terms are matched case-sensitively against the exact indexed values.
You could use a multi-field mapping on city, with an analyzed field for the match_phrase_prefix query and a not_analyzed sub-field for the terms aggregation. (As you already suspected, for "sao" to match "São" the analyzed side also needs an analyzer with lowercase and asciifolding filters.)

Example:

put <index>/<type>/_mapping
{
   "properties": {
      "city": {
         "type": "string",
         "fields": {
            "raw": {
               "type": "string",
               "index": "not_analyzed"
            }
         }
      }
   }
}

post <index>/<type>/_search
{
    "size": 0,
    "query": {
        "match_phrase_prefix": {
            "city": "Sao"
        }
    },
    "aggs": {
        "city": {
                "terms": {
                    "field": "city.raw"
                }
            }
    }
}
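For readers using the Java client elsewhere in this post, the same request can be sketched roughly as follows (an illustration only, assuming the multi-field mapping above and an Elasticsearch 2.x TransportClient named client; the index and type names are placeholders):

    import org.elasticsearch.action.search.SearchResponse;
    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.search.aggregations.AggregationBuilders;

    // Query the analyzed "city" field, aggregate on the not_analyzed "city.raw" sub-field.
    SearchResponse response = client.prepareSearch("<index>")
            .setTypes("<type>")
            .setSize(0)
            .setQuery(QueryBuilders.matchPhrasePrefixQuery("city", "Sao"))
            .addAggregation(AggregationBuilders.terms("city").field("city.raw"))
            .get();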

References

Elasticsearch SearchContextMissingException during ‘scan & scroll’ query with Spring Data Elasticsearch

Elasticsearch SearchContextMissingException during ‘scan & scroll’ query with Spring Data Elasticsearch

I am using Elasticsearch 2.2.0 with the default cluster configuration. I am encountering a problem with a scan and scroll query using Spring Data Elasticsearch. When I execute the query I get an error like this:
[2016-06-29 12:45:52,046][DEBUG][action.search.type ] [Vector] [155597] Failed to execute query phase
RemoteTransportException[[Vector][10.132.47.95:9300][indices:data/read/search[phase/scan/scroll]]]; nested: SearchContextMissingException[No search context found for id [155597]];
Caused by: SearchContextMissingException[No search context found for id [155597]]
at org.elasticsearch.search.SearchService.findContext(SearchService.java:611)
at org.elasticsearch.search.SearchService.executeScan(SearchService.java:311)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(SearchServiceTransportAction.java:433)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(SearchServiceTransportAction.java:430)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

My 'scan & scroll' code:
public List getAllElements(SearchQuery searchQuery) {
    searchQuery.setPageable(new PageRequest(0, PAGE_SIZE));
    String scrollId = elasticsearchTemplate.scan(searchQuery, 1000, false);
    List allElements = new LinkedList<>();
    boolean hasRecords = true;
    while (hasRecords) {
        Page page = elasticsearchTemplate.scroll(scrollId, 5000, resultMapper);
        if (page.hasContent()) {
            allElements.addAll(page.getContent());
        } else {
            hasRecords = false;
        }
    }
    elasticsearchTemplate.clearScroll(scrollId);
    return allElements;
}

When my query result size is less than the PAGE_SIZE parameter, the error occurs five times; I guess one per shard. When the result size is bigger than PAGE_SIZE, the error occurs a few more times. I've tried to refactor my code to not call:
Page page = elasticsearchTemplate.scroll(scrollId, 5000, resultMapper);

when I'm sure that the page has no content. But that works only if PAGE_SIZE is bigger than the query result, so it is not a solution at all.
I should add that the problem occurs only on the Elasticsearch side. On the client side the errors are hidden and in each case the query result is correct. Does anybody know what causes this issue?
Thanks for the help,
Simon.

Solutions/Answers:

Solution 1:

I ran into a similar problem and I suspect that Spring Data Elasticsearch has some internal bug in how it passes the scroll ID. In my case I just tried to scroll through the whole index, and I can rule out @Val's answer about "This usually happens if your search context is not alive anymore", because the exceptions occurred regardless of the duration. Also, the exceptions started after the first page and occurred for every subsequent paging query.

In my case I could simply use elasticsearchTemplate.stream(). It uses Scroll & Scan internally and seems to pass the Scroll-ID correctly. Oh, and it’s simpler to use:

SearchQuery searchQuery = new NativeSearchQueryBuilder()
    .withQuery(QueryBuilders.matchAllQuery())
    .withPageable(new PageRequest(0, 10000))
    .build();

Iterator<Post> postIterator = elasticsearchTemplate.stream(searchQuery, Post.class);

while(postIterator.hasNext()) {
    Post post = postIterator.next();
}

Solution 2:

I get this error if the Elasticsearch system closes the connection. Typically it's exactly what @Val said – dead connections. Things sometimes die in ES for no good reason – master node down, data node too congested, badly performing queries, Kibana running at the same time you are in the middle of querying. I've been hit by all of these at one time or another and got this error.

Suggestion: Up the initial connection time – 1000L might be too short for it to get what it needs. It won’t hurt if the query ends sooner.

This also happens randomly when I try to pull too much data too quickly; you might have huge documents, and trying to pull a PAGE_SIZE of 50,000 might be a little too much. We don't know what you chose for PAGE_SIZE.

Suggestion: Lower PAGE_SIZE to something like 500. Or 20. See if these smaller values make the errors less frequent.

I know I have less of these problems after moving to ES 2.3.3.

Solution 3:

This usually happens if your search context is not alive anymore.

In your case, you're starting your scan with a keep-alive of 1 second and then each scroll call keeps the context alive for 5 seconds. That's probably too low. The default duration to keep the search context alive is 1 minute, so you should probably increase it to 60 seconds like this:

String scrollId = elasticsearchTemplate.scan(searchQuery, 60000, false);
...
Page<T> page = elasticsearchTemplate.scroll(scrollId, 60000, resultMapper);
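Putting that back into the method from the question, the adjusted loop would look roughly like this (the same code as in the question, only with the longer keep-alive values; both values are in milliseconds):

    public List getAllElements(SearchQuery searchQuery) {
        searchQuery.setPageable(new PageRequest(0, PAGE_SIZE));
        // Keep the search context alive for 60 seconds between calls.
        String scrollId = elasticsearchTemplate.scan(searchQuery, 60000, false);
        List allElements = new LinkedList<>();
        boolean hasRecords = true;
        while (hasRecords) {
            Page page = elasticsearchTemplate.scroll(scrollId, 60000, resultMapper);
            if (page.hasContent()) {
                allElements.addAll(page.getContent());
            } else {
                hasRecords = false;
            }
        }
        // Release the search context once the scroll is exhausted.
        elasticsearchTemplate.clearScroll(scrollId);
        return allElements;
    }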

References

Spring Elasticsearch HashMap[String, String] mapping value cannot be not_analyzed

Spring Elasticsearch HashMap[String, String] mapping value cannot be not_analyzed

Actually my question is very simple: I want my HashMap values to be not_analyzed!
Now I have an object containing a HashMap<String, String>, which looks like:
class SomeObject {
    String id;

    @Field(type = FieldType.Object, index = FieldIndex.not_analyzed)
    Map<String, String> parameters;
}

Then Spring Data Elasticsearch generates a mapping like this at the beginning:
{
  "id": {
    "type": "string"
  },
  "parameters": {
    "type": "object"
  }
}

Then, after I add some objects to ES, it adds more attributes, like this:
{
  "id": {
    "type": "string"
  },
  "parameters": {
    "properties": {
      "shiduan": {
        "type": "string"
      },
      "季节": {
        "type": "string"
      }
    }
  }
}

Now, because the parameters values are analyzed, they cannot be searched properly in ES; I mean I cannot search the Chinese values, while I have verified that I can search the English ones at this point.
Then, after reading this post https://stackoverflow.com/a/32044370/4148034, I updated the mapping manually to this:
{
  "id": {
    "type": "string"
  },
  "parameters": {
    "properties": {
      "shiduan": {
        "type": "string",
        "index": "not_analyzed"
      },
      "季节": {
        "type": "string",
        "index": "not_analyzed"
      }
    }
  }
}

I can search the Chinese values now, so I know the problem is "not_analyzed", like the post said.
Finally, can anyone tell me how to make the map values "not_analyzed"? I have searched Google and Stack Overflow many times and still cannot find the answer. Let me know if someone can help, thanks very much.

Solutions/Answers:

Solution 1:

One way to achieve this is to create a mappings.json file on your build path (e.g. yourproject/src/main/resources/mappings) and then reference that mapping using the @Mapping annotation in your class.

@Document(indexName = "your_index", type = "your_type")
@Mapping(mappingPath = "/mappings/mappings.json")
public class SomeObject{
    String id;
    @Field(type=FieldType.Object, index=FieldIndex.not_analyzed)
    Map<String, String> parameters;
}

In that mapping file, we're going to add a dynamic template which targets the subfields of your parameters hashmap and declares them to be not_analyzed strings.

{
  "mappings": {
    "your_type": {
      "dynamic_templates": [
        {
          "strings": {
            "match_mapping_type": "string",
            "path_match": "parameters.*",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}

You need to make sure to delete your_index first and then restart your application so it can be recreated with the proper mapping.
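If the index is managed from code rather than with curl, deleting it could look roughly like this (a sketch only, using the same transport-client style as earlier in this post; the client bean and the index name are assumptions):

    // Delete the index so it is recreated with the mapping from mappings.json
    // on the next application start.
    if (client.admin().indices().prepareExists("your_index").get().isExists()) {
        client.admin().indices().prepareDelete("your_index").get();
    }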

References

How to find fields with mapping conflicts

How to find fields with mapping conflicts

My index settings in Kibana tell me that I have fields with mapping conflicts in my logstash-* index patterns.
What is the easiest way to find out which fields have a conflicting mapping and/or in which indices the conflict occurs?

Solutions/Answers:

Solution 1:

As of at least Kibana 5.2, you can type “conflict” into the Filter field, which will filter all fields down to only those which have a conflict. At the far right there is a column named “controls”, and for each field it has a button with a pencil icon. Clicking that will tell you which indices have which mapping.

[Screenshot: fields filtered to only those with conflicts]

[Screenshot: indices in which the field mapping conflicts]

Solution 2:

It should be easy to spot those in the list of fields when defining the pattern. Something like this:

[Screenshot omitted]

Solution 3:

In Kibana 5.5.2, you can click the dropdown to the right of the Filter search box and select "conflict". This is on the Index Patterns page.

Solution 4:

Since I couldn't locate the mapping conflict in the GUI, I went down the hard path: analysed my config for missing/conflicting field types, found the offender, and reindexed my data.

References

Accessing kibana on local network

Accessing kibana on local network

I want Kibana, which is running on my local system, to be accessible at local_ip:5601 from other systems on my local network. I tried adding these two lines to the Elasticsearch config:
http.cors.allow-origin: "*"
http.cors.enabled: true

But it didn't work.

Solutions/Answers:

Solution 1:

In your kibana.yml, look for the server.host line. It will probably be commented out (#). Remove the "#", set it to "0.0.0.0", and restart your Kibana service. That should allow you to access Kibana from your local network IP, e.g. 192.168.10.20, and make it reachable from your other systems.
In that same kibana.yml file you will find a URL that points to "http://localhost:9200" by default. If your Elasticsearch instance is hosted at any different URL, you must specify it in the Kibana config file.

You can find more information about it here

Solution 2:

See this related question:
vagrants-port-forwarding-not-working

I was working with Kibana in a Centos 7 Vagrant VM.
I was not able to access the Kibana webui from the Host computer.

Stopping firewalld and disabling SELinux did not do the trick.

My VM IP address was 192.168.2.2, so I tested with curl http://192.168.2.2:5601/ and it would work from within the VM, but not from the host CLI.

I tested that port forwarding was working by installing Apache in the VM and could access it from the Host browser with http://localhost:80, so port forwarding was not the problem.

My problem was the server.host parameter in the kibana.yml configuration file, which I had set to the ip address of the VM.

I changed it from this:

server.host: "192.168.2.2"

to this:

server.host: "0.0.0.0"

restarted kibana and could access the webui from the Host.

Solution 3:

This is how I got it to work:

Vagrantfile:

config.vm.network "forwarded_port", guest: 5601, host: 5602

httpd.conf:

Listen 5602
<VirtualHost *:5602>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName kibana.mydomain.dev
    ProxyPass / http://127.0.0.1:5601/
    ProxyPassReverse / http://127.0.0.1:5601/
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
</VirtualHost>

References

Creating custom elasticsearch index with logstash

Creating custom elasticsearch index with logstash

I have to create a custom index in Elasticsearch using Logstash. I have created a new template in Elasticsearch, and in the Logstash configuration I have specified the template path, template_name and template_overwrite values, but still, whenever I run Logstash, a new index is generated with the logstash-dd-mm-yy pattern, not with the template_name specified in the properties.
The Logstash config file is:
input {
  file {
    path => "/temp/file.txt"
    type => "words"
    start_position => "beginning"
  }
}
filter {
  mutate {
    add_field => { "words" => "%{message}" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticserver:9200"]
    template => "pathtotemplate.json"
    template_name => "newIndexName-*"
    template_overwrite => true
  }
  stdout{}
}

The index template file is:
{
  "template": "dictinary-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index": {
      "query": { "default_field": "@words" },
      "store": { "compress": { "stored": true, "tv": true } }
    }
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "_source": { "compress": true },
      "dynamic_templates": [
        {
          "string_template": {
            "match": "*",
            "mapping": { "type": "string", "index": "not_analyzed" },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {
        "@fields": { "type": "object", "dynamic": true, "path": "full" },
        "@words": { "type": "string", "index": "analyzed" },
        "@source": { "type": "string", "index": "not_analyzed" },
        "@source_host": { "type": "string", "index": "not_analyzed" },
        "@source_path": { "type": "string", "index": "not_analyzed" },
        "@tags": { "type": "string", "index": "not_analyzed" },
        "@timestamp": { "type": "date", "index": "not_analyzed" },
        "@type": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

Please help

Solutions/Answers:

Solution 1:

To do what you want, you have to set the index parameter in the Elasticsearch output block. (Also note that the template file's "template" pattern is "dictinary-*"; an index template only applies to indices whose names match that pattern, so it will not apply to indices named newIndexName-... unless the pattern is changed.) Your output block will look like this:

output {
    elasticsearch {
     hosts => ["elasticserver:9200"]
     index => "newIndexName-%{+YYYY.MM.dd}"
     template => "pathtotemplate.json"
     template_name => "newIndexName-*"
     template_overwrite => true
    }
    stdout{}
}

References

nodejs SyntaxError: Unexpected token

nodejs SyntaxError: Unexpected token

I am using elasticsearch-exporter to export data from Elasticsearch.
The tool is a Node.js application.
When I run node exporter.js to make the tool list all the available options, it crashes with the following exception:
/home/me/storage/Elasticsearch-Exporter/log.js:54
exports.error = (...args) => !capture("ERROR", args) && console.log(timestamp() + util.format(...args).red);
^^^

SyntaxError: Unexpected token ...
at exports.runInThisContext (vm.js:53:16)
at Module._compile (module.js:374:25)
at Object.Module._extensions..js (module.js:417:10)
at Module.load (module.js:344:32)
at Function.Module._load (module.js:301:12)
at Module.require (module.js:354:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/home/anas/storage/Elasticsearch-Exporter/exporter.js:9:11)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)

Here is the line where the exception is thrown:
exports.error = (...args) => !capture("ERROR", args) && console.log(timestamp() + util.format(...args).red);

I think the error is related to the Node.js version, but I am not sure.
The output of the node --version command is v4.2.6.
The output of the npm --version command is 3.10.6.

Solutions/Answers:

Solution 1:

Yes, indeed, ... is called the spread operator and is only available since Node.js 6

The elasticsearch-exporter project declares in its package.json file that it only works with node version > 6

So since you’re running Node.js 4.2.6, you either need to upgrade your Node.js installation or fork the elasticsearch-exporter project and modify it to work with Node.js 4.2.6.

References

Matching arrays in elastic search

Matching arrays in elastic search

I have a document like the one below:
{
  "_index": "abc_local",
  "_type": "users",
  "_id": "1",
  "_version": 5,
  "found": true,
  "_source": {
    "firstname": "simer",
    "lastname": "kaur",
    "gender": "1",
    "Address": "Punjab House Fed. Housing Society, Amritsar, Punjab, India",
    "email": "rav@yopmail.com",
    "occupation": "Php Developer",
    "work": "Development",
    "fav_hunting_land": 2,
    "zipcode": "",
    "marital_status": "1",
    "phone": "1234567899",
    "school": "sdfergdfh",
    "species": [{
      "id": 1
    }, {
      "id": 2
    }, {
      "id": 3
    }, {
      "id": 4
    }, {
      "id": 5
    }, {
      "id": 6
    }],
    "activities": [{
      "id": 1
    }],
    "fav_weapon": 6,
    "weapons": [{
      "id": 1
    }, {
      "id": 2
    }, {
      "id": 3
    }, {
      "id": 6
    }],
    "properties": [{
      "id": 4
    }]
  }
}

and I need to match users on the basis of weapons, and I am trying something like:
$params = [
    'index' => Constants::INDEX,
    'type' => Constants::DOC_TYPE_USERS,
    'body' => [
        "query" => [
            "bool" => [
                "must" => [ "match" => [ "weapons.id" => $params['weapons'] ]],
                "should" => [
                    [ "match" => [ "firstname" => $params['search_text'] ]],
                    [ "match" => [ "lastname" => $params['search_text'] ]]
                ]
            ]
        ]
    ]
];

as I am using Elasticsearch from PHP. Here $params['weapons'] is:
array (size=2)
  0 => string '1' (length=1)
  1 => string '2' (length=1)

I get an error:

illegal_state_exception: Can't get text on a START_ARRAY at 1:36

Any suggestions/help on how I can match against the array would be appreciated. I took my reference from the nested datatypes documentation.
Update #1:
The parameters I am sending to my function: {"from":0,"size":null,"city":null,"state":"0","weapons":["1","2"],"activities":[],"species":[],"properties":[],"search_text":"lastname"}
Update #2:
The body of my query in JSON format:
{
  "index": "abc_local",
  "type": "users",
  "body": {
    "query": {
      "bool": {
        "must": {
          "match": {
            "weapons.id": ["1", "2"]
          }
        },
        "should": [{
          "match": {
            "firstname": "simer"
          }
        }, {
          "match": {
            "lastname": "simer"
          }
        }]
      }
    }
  }
}

Solutions/Answers:

Solution 1:

You can simply replace the first match query with a terms one, as match doesn't work with arrays of values.

    $params = [
        'index' => Constants::INDEX,
        'type' => Constants::DOC_TYPE_USERS,
        'body' => [
            "query" => [
                "bool" => [
                    // changed the first "match" to "terms" here
                    "must" => [ "terms" => [ "weapons.id" => $params['weapons'] ]],
                    "should" => [
                        [ "match" => [ "firstname" => $params['search_text'] ]],
                        [ "match" => [ "lastname" => $params['search_text'] ]]
                    ]
                ]
            ]
        ]
    ];

Solution 2:

If you want to check whether any value from an array matches a field in the index, then you have to use "terms" instead of "match".

{
    "index": "abc_local",
    "type": "users",
    "body": {
        "query": {
            "bool": {
                "must": {
                    "terms": {
                        "weapons.id": ["1", "2"]
                    }
                },
                "should": [{
                    "match": {
                        "firstname": "simer"
                    }
                }, {
                    "match": {
                        "lastname": "simer"
                    }
                }]
            }
        }
    }
}

Refer to "Terms Level Query" in the Elasticsearch docs.
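For completeness, the same distinction in the Java query builder API used elsewhere in this post (a sketch only: matchQuery expects a single value, while termsQuery accepts several, which is what the weapons.id filter needs):

    import org.elasticsearch.index.query.QueryBuilder;
    import org.elasticsearch.index.query.QueryBuilders;

    // termsQuery accepts multiple values; matchQuery expects a single value.
    QueryBuilder query = QueryBuilders.boolQuery()
            .must(QueryBuilders.termsQuery("weapons.id", "1", "2"))
            .should(QueryBuilders.matchQuery("firstname", "simer"))
            .should(QueryBuilders.matchQuery("lastname", "simer"));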

References