
elasticsearch 8: Important Configuration Changes

I wrote about installation and setup earlier; this post covers the important settings. It is mostly a translation of the "Important Configuration Changes" section of the Elasticsearch guide. My translation isn't great, so key passages from the English original are kept alongside. Why this section? Because it really is important, and someone happened to ask me a configuration question today, so I sorted it out. It also serves as a reference for my future self: these settings aren't touched often, and having to start over from scratch every time is annoying. (by 歪歪)

These settings live in elasticsearch.yml. Please read this entire section! All configurations presented are equally important, and are not listed in any particular order. Please read through all configuration options and apply them to your cluster.
Steps
1

Elasticsearch ships with very good defaults, especially when it comes to performance-related settings and options. When in doubt, just leave the settings alone. We have witnessed countless dozens of clusters ruined by errant settings because the administrator thought he could turn a knob and gain a 100-fold improvement.

2

Assign Names

Elasticsearch by default starts a cluster named elasticsearch. It is wise to rename your production cluster to something else, simply to prevent accidents whereby someone's laptop joins the cluster. A simple change to elasticsearch_production can save a lot of heartache.
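A minimal sketch of what this looks like in elasticsearch.yml, using the elasticsearch_production name from the example above:

# rename the cluster so a stray laptop cannot accidentally join it
cluster.name: elasticsearch_production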

3

Assign Node Names

Similarly, it is wise to change the names of your nodes. As you've probably noticed by now, Elasticsearch assigns a random Marvel superhero name to your nodes at startup. This is cute in development, but less cute when it is 3 a.m. and you are trying to remember which physical machine was Tagak the Leopard Lord. More important, since these names are generated on startup, each time you restart your node, it will get a new name. This can make logs confusing, since the names of all the nodes are constantly changing. Boring as it might be, we recommend you give each node a name that makes sense to you, a plain, descriptive name. This is also configured in your elasticsearch.yml.
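A sketch of the corresponding line in elasticsearch.yml; the name node-1-data is just an illustrative placeholder:

# give the node a stable, descriptive name instead of a random one
node.name: node-1-data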

4

Paths

By default, Elasticsearch will place the plug-ins, logs, and, most important, your data in the installation directory. This can lead to unfortunate accidents, whereby the installation directory is accidentally overwritten by a new installation of Elasticsearch. If you aren't careful, you can erase all your data. Don't laugh, we've seen it happen more than a few times. The best thing to do is relocate your data directory outside the installation location. You can optionally move your plug-in and log directories as well. This can be changed as follows:
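A sketch of the three path settings in elasticsearch.yml; the /path/to/... locations are placeholders to replace with real mount points:

# keep your data outside the installation directory
path.data: /path/to/data
# logs and plug-ins can optionally be relocated as well
path.logs: /path/to/logs
path.plugins: /path/to/plugins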

5

Notice that you can specify more than one directory for data by using comma-separated lists. Data can be saved to multiple directories, and if each directory is mounted on a different hard drive, this is a simple and effective way to set up a software RAID 0. Elasticsearch will automatically stripe data between the different directories, boosting performance.
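For instance, a sketch with two data directories on separate drives (the paths are placeholders):

# stripe data across two drives, similar in effect to a software RAID 0
path.data: /mnt/disk1/es-data,/mnt/disk2/es-data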

6

Minimum Master Nodes

The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster. When you have a split brain, your cluster is in danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.

This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place. This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1. Here are some examples:

1. If you have ten regular nodes (can hold data, can become master), a quorum is 6.
2. If you have three dedicated master nodes and a hundred data nodes, the quorum is 2, since you need to count only the nodes that are master eligible.
3. If you have two regular nodes, you are in a conundrum. A quorum would be 2, but this means a loss of one node will make your cluster inoperable. A setting of 1 will allow your cluster to function, but doesn't protect against split brain. It is best to have a minimum of three nodes in situations like this.

A sketch of the static setting follows below.
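For the ten-node case above, a sketch of the line in elasticsearch.yml (6 is the quorum computed there):

# require a quorum of master-eligible nodes before a master can be elected
discovery.zen.minimum_master_nodes: 6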

7

But because Elasticsearch clusters are dynamic, you could easily add or remove nodes, which will change the quorum. It would be extremely irritating if you had to push new configurations to each node and restart your whole cluster just to change the setting. For this reason, minimum_master_nodes (and other settings) can be configured via a dynamic API call. You can change the setting while your cluster is online:

PUT /_cluster/settings
{
    "persistent" : {
        "discovery.zen.minimum_master_nodes" : 2
    }
}
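A usage sketch with curl, assuming a node is reachable on localhost:9200:

curl -XPUT 'localhost:9200/_cluster/settings' -d '
{
    "persistent" : {
        "discovery.zen.minimum_master_nodes" : 2
    }
}'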

8

This will become a persistent setting that takes precedence over whatever is in the static configuration. You should modify this setting whenever you add or remove master-eligible nodes.

9

Recovery Settings

Several settings affect the behavior of shard recovery when your cluster restarts. First, we need to understand what happens if nothing is configured.

Imagine you have ten nodes, and each node holds a single shard, either a primary or a replica, in a 5 primary / 1 replica index. You take your entire cluster offline for maintenance (installing new drives, for example). When you restart your cluster, it just so happens that five nodes come online before the other five. Maybe the switch to the other five is being flaky, and they didn't receive the restart command right away. Whatever the reason, you have five nodes online. These five nodes will gossip with each other, elect a master, and form a cluster. They notice that data is no longer evenly distributed, since five nodes are missing from the cluster, and immediately start replicating new shards between each other.

Finally, your other five nodes turn on and join the cluster. These nodes see that their data is being replicated to other nodes, so they delete their local data (since it is now redundant, and may be outdated). Then the cluster starts to rebalance even more, since the cluster size just went from five to ten.

During this whole process, your nodes are thrashing the disk and network, moving data around for no good reason. For large clusters with terabytes of data, this useless shuffling of data can take a really long time. If all the nodes had simply waited for the cluster to come online, all the data would have been local and nothing would need to move.

Now that we know the problem, we can configure a few settings to alleviate it. First, we need to give Elasticsearch a hard limit:

gateway.recover_after_nodes: 8

10

This will prevent Elasticsearch from starting a recovery until at least eight (data or master) nodes are present. The value for this setting is a matter of personal preference: how many nodes do you want present before you consider your cluster functional? In this case, we are setting it to 8, which means the cluster is inoperable unless there are at least eight nodes.

11

Then we tell Elasticsearch how many nodes should be in the cluster, and how long we want to wait for all those nodes:

gateway.expected_nodes: 10
gateway.recover_after_time: 5m

12

What this means is that Elasticsearch will do the following:

1. Wait for eight nodes to be present.
2. Begin recovering after 5 minutes, or after ten nodes have joined the cluster, whichever comes first.

These three settings allow you to avoid the excessive shard swapping that can occur on cluster restarts. It can literally make recovery take seconds instead of hours. A combined sketch of all three settings is shown after the note below.

NOTE: These settings can only be set in the config/elasticsearch.yml file or on the command line (they are not dynamically updatable), and they are only relevant during a full cluster restart.
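Putting the recovery settings together, a sketch of the relevant block in elasticsearch.yml for the ten-node scenario above:

# don't begin recovery until at least 8 nodes are present
gateway.recover_after_nodes: 8
# the full cluster is expected to contain 10 nodes
gateway.expected_nodes: 10
# if fewer than 10 nodes have joined, begin recovery after 5 minutes anyway
gateway.recover_after_time: 5m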

13

Prefer Unicast over Multicast

Elasticsearch is configured to use unicast discovery out of the box to prevent nodes from accidentally joining a cluster. Only nodes running on the same machine will automatically form a cluster.

While multicast is still provided as a plugin, it should never be used in production. The last thing you want is for nodes to accidentally join your production network, simply because they received an errant multicast ping. There is nothing wrong with multicast per se. Multicast simply leads to silly problems, and can be a bit more fragile (for example, a network engineer fiddles with the network without telling you, and all of a sudden nodes can't find each other anymore).

To use unicast, you provide Elasticsearch a list of nodes that it should try to contact. When a node contacts a member of the unicast list, it receives a full cluster state that lists all of the nodes in the cluster. It then contacts the master and joins the cluster. This means your unicast list does not need to include all of the nodes in your cluster. It just needs enough nodes that a new node can find someone to talk to. If you use dedicated masters, just list your three dedicated masters and call it a day. This setting is configured in elasticsearch.yml:

discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]

14

Note that the defaults for unicast and multicast differ somewhat between the 1.x and 2.x versions. I am running 1.7.2, so there should be two settings that need changing.
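A sketch of what those two lines might look like on 1.x, where multicast discovery is still built in and enabled by default (the hosts are placeholders):

# turn off multicast discovery (still the default on 1.x)
discovery.zen.ping.multicast.enabled: false
# list a few stable nodes for unicast discovery
discovery.zen.ping.unicast.hosts: ["host1", "host2:9300"]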
