List: gluster-users
Subject: Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes
From: Yuhao Zhang <zzyzxd () gmail ! com>
Date: 2018-08-28 4:23:17
Message-ID: 14692A7F-3379-4C15-8BC4-54FA16452C6B () gmail ! com

Hi Xavi,

I went back and checked the ZFS stats, and it seemed to be behaving normally during the event (one glusterfsd process started to use 100% CPU at around 1:40, which is when the ZFS ARC target size started to increase):

[Screenshot: ZFS ARC statistics]

So it looks like ZFS was always able to: 1. keep data and metadata usage under the target (half of the total physical RAM, i.e. 32GB); and 2. release RAM when the OS required it.

Today I observed another hang on a different server. htop showed that most of the glusterfsd processes were stuck in D status using 0 CPU, while one or two of them were in R status using 100%. I also saw an updatedb.mlocate process (a scheduled daily cron job) in D status with 100% CPU. I am not sure if they are related, but since I don't use mlocate, I disabled it.

Thanks,
Yuhao

> On Aug 23, 2018, at 18:28, Xavi Hernandez <jahernan@redhat.com> wrote:
>
> Hi Yuhao,
>
> sorry for the late answer. I've been on holiday and just returned.
>
> On Wed, 8 Aug 2018, 07:49, Yuhao Zhang <zzyzxd@gmail.com> wrote:
>
> > Hi Xavi,
> >
> > Thank you for the suggestions, these are extremely helpful. I hadn't thought it could be a ZFS problem. I went back and checked a longer monitoring window and now I can see a pattern. Please see the attached Grafana screenshot (also available here: https://cl.ly/070J2y3n1u0F ; note that the data gaps are from when I took the server down for reboots):
> >
> > [Screenshot: Grafana memory monitoring]
> >
> > Between 8/4 and 8/6, I ran two transfer tests and experienced two of the Gluster hanging problems: one during the first transfer, and another shortly after the second transfer. I marked both with pink lines.
> >
> > It looks like free memory was almost exhausted during my transfer tests. The system has a very large amount of cached memory, which I think is due to the ZFS ARC. However, I am under the impression that ZFS will release space from the ARC if it observes low available system memory. I am not sure why it didn't do that.
>
> Yes, it should release memory, but for some reason I don't understand, under a high metadata load it's not able to release the allocated memory fast enough (or so it seems). I've observed high CPU utilization by a ZFS process at this point.
>
> > I didn't tweak the related ZFS parameters. zfs_arc_max was set to 0 (the default value). According to the docs, that means: "Max arc size of ARC in bytes. If set to 0 then it will consume 1/2 of system RAM." So it appears that this setting didn't work.
>
> From my experience, with a high metadata load this limit is not respected. Using 1/8 of system RAM seemed to keep memory consumption under control, at least for the workloads I used.
>
> In theory, ZFS 0.7.x.y should solve the memory management problems, but I haven't tested it.
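> On this 64GB server, 1/8 of RAM would be an 8GiB cap. As a quick sketch (the exact value is only an example), it can be tried at runtime before making it permanent:
>
>   # 8GiB = 8 * 1024^3 bytes; takes effect without a reboot
>   echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
>
>   # verify the new limit (the c_max field)
>   grep c_max /proc/spl/kstat/zfs/arcstats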
> > When the server was under heavy IO, the used memory actually decreased, which I can't explain.
>
> I've only seen this problem when accessing large numbers of different files (typical of a copy, rsync or find on a volume with thousands or millions of files and directories). However, high IO on a small set of files doesn't cause any trouble. It's related to metadata caching, so high IO on a small set of files doesn't require much metadata.
>
> > May I ask if you, or anyone else in this group, has a recommendation on ZFS settings for my setup? My server has 64GB of physical memory and 150GB of SSD space reserved for L2ARC. The zpool has 6 vdevs, each with 10 x 12TB hard drives in raidz2. Total usable space in the zpool is 482TB.
>
> As I said, I would try 1/8 of system memory for the ARC (it will use more than that anyway). A cache drop also helps when memory is getting exhausted; it causes ZFS to release memory faster, though I don't consider it a good solution.
>
> Also make sure that zfs_txg_timeout is set to 5 or a similar value to avoid long bursts of disk access. Other options to consider, depending on the use case, are zfs_prefetch_disable=1 and zfs_nocacheflush=1.
>
> For better performance with gluster, the xattr option on ZFS datasets should be set to "sa", but this needs to be done at volume creation, before any files are created; otherwise it will only be applied to newer files. To use "sa" safely, version 0.6.5.8 or higher should be used.
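> Putting those together, a persistent configuration could look something like this (a sketch; the pool/dataset names are made up, and the ARC value assumes this 64GB machine):
>
>   # /etc/modprobe.d/zfs.conf -- applied when the zfs module loads
>   options zfs zfs_arc_max=8589934592 zfs_txg_timeout=5 zfs_prefetch_disable=1 zfs_nocacheflush=1
>
>   # xattr=sa on a new dataset, or on an existing one (only newly created files pick it up)
>   zfs create -o xattr=sa tank/gluster-bricks
>   zfs set xattr=sa tank/existing-dataset
>
> Keep in mind that zfs_nocacheflush=1 is only safe when the write caches are power-loss protected.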
> Xavi
>
> > Thank you,
> > Yuhao
> >
> > On Aug 7, 2018, at 01:36, Xavi Hernandez <jahernan@redhat.com> wrote:
> >
> > > Hi Yuhao,
> > >
> > > On Mon, 6 Aug 2018, 15:26, Yuhao Zhang <zzyzxd@gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > I just experienced another hang one hour ago, and the server was not even under heavy IO.
> > > >
> > > > Atin, I attached the process monitoring results and another statedump.
> > > >
> > > > Xavi, ZFS was fine; during the hang I could still write directly to the ZFS volume. My ZFS version: Loaded module v0.6.5.6-0ubuntu16, ZFS pool version 5000, ZFS filesystem version 5.
> > >
> > > I highly recommend you upgrade to at least version 0.6.5.8. It fixes a kernel panic that can happen when used with gluster. However, this is not your current problem.
> > >
> > > Top statistics show low available memory and high CPU utilization by the kswapd process (along with one of the gluster processes). I've seen frequent memory management problems with ZFS. Have you configured any ZFS parameters? It's highly recommended to tweak some memory limits.
> > >
> > > If that were the problem, there's one thing that should alleviate it (and show whether it's related):
> > >
> > > echo 3 > /proc/sys/vm/drop_caches
> > >
> > > This should be done on all bricks from time to time. You can wait until the problem appears, but in that case the recovery time can be longer.
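> > > For example, a periodic drop via cron on each brick server (a sketch; the hourly schedule is arbitrary, adjust it to the workload):
> > >
> > >   # /etc/cron.d/zfs-drop-caches
> > >   0 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches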
> > > I think this should fix the high CPU usage of kswapd. If so, we'll need to tweak some ZFS parameters.
> > >
> > > I'm not sure if the high CPU usage of gluster could be related to this or not.
> > >
> > > Xavi
> > >
> > > > Thank you,
> > > > Yuhao
> >
> > <Image 2018-08-07 at 23.59.09.png>