Let's Take a Look at the Ignite Thick Client!

Karim · December 3, 2021

1. Version

💬

  • OS : CentOS Linux release 7.8.2003 (Core)
  • Ignite : 2.11.0
  • Docker : 19.03.13

2. What is a Thick Client?

💬

  • The difference between a thick client node and a server node is logical rather than physical.
  • A thick client does not participate in caching, computing, service deployment, or other similar activities, but internally it works in almost the same way as a server node.
  • A thick client is essentially a regular Ignite node running in client mode (see the sketch after this list).
  • It connects to an Ignite cluster deployed separately on the server side over standard socket connections.
  • Because it is part of the cluster topology, it is aware of the data distribution and partitioning configuration applied to the caches.
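
The points above can be checked directly in code. Below is a minimal sketch (not from the original post) that starts a node with the client-mode configuration from section 4 and prints whether the local node is a client and how many nodes it sees in the topology.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ThickClientCheck {
    public static void main(String[] args) {
        // Start a node using the client-mode XML configuration from section 4.
        try (Ignite ignite = Ignition.start("config/ignite-configuration.xml")) {
            // A thick client is a full Ignite node, only flagged as a client.
            System.out.println("isClient = " + ignite.cluster().localNode().isClient());
            // Being part of the topology, it sees the server nodes as well.
            System.out.println("nodes in topology = " + ignite.cluster().nodes().size());
        }
    }
}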

3. When Should You Use It?

💬

  • Use a thick client when the application is deployed in the same environment as the server nodes and has full network connectivity to every server node.
  • Thick-client-specific features include near caches and continuous queries, covered in sections 5 and 6 below.

4. Thick Client Implementation in Java

💬 Starting a thick client node

Ignite ignite = Ignition.start("config/ignite-configuration.xml");
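
Once started, the returned Ignite handle is used like on any other node, while the data itself lives on the server-side cluster. A small sketch, reusing the PUBLIC cache name from the near-cache example below (keys and values are made up for illustration):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ThickClientPutGet {
    public static void main(String[] args) {
        // Start the thick client with the XML configuration shown below.
        try (Ignite ignite = Ignition.start("config/ignite-configuration.xml")) {
            // The cache entries are stored on the server-side cluster, not on this client.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("PUBLIC");
            cache.put(1, "hello from the thick client");
            System.out.println("value = " + cache.get(1));
        }
    }
}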

💬 ignite-configuration.xml

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite thick client configuration: connects to an existing cluster in client mode.
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/util
        http://www.springframework.org/schema/util/spring-util.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Run this node as a thick client -->
        <property name="clientMode" value="true"/>

        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <!-- prevent this client from reconnecting on connection loss -->
                <property name="clientReconnectDisabled" value="true"/>
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>IGNITE_SERVER_IP:DISCOVERY_SPI_PORT</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>

        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="slowClientQueueLimit" value="1000"/>
            </bean>
        </property>
    </bean>
</beans>
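
The same settings can also be built programmatically instead of with Spring XML. The following is a hedged sketch: the server address is a placeholder (47500 is Ignite's default discovery port), and the rest mirrors the XML above.

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ThickClientProgrammaticConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // thick client mode, same as clientMode=true in the XML

        // Discovery: point at the server-side cluster (replace the placeholder address).
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("IGNITE_SERVER_IP:47500"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);
        discoverySpi.setClientReconnectDisabled(true); // do not reconnect on connection loss
        cfg.setDiscoverySpi(discoverySpi);

        // Communication SPI tuning, same as the XML above.
        TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
        communicationSpi.setSlowClientQueueLimit(1000);
        cfg.setCommunicationSpi(communicationSpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("joined as client = " + ignite.cluster().localNode().isClient());
        }
    }
}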

5. Near Caches

💬

  • A near cache is a smaller local cache on top of a partitioned or replicated cache that keeps the most recently or most frequently accessed data in on-heap memory.
  • It can be configured programmatically on the client side, as in the snippet below.
// Configure a near cache with an LRU eviction policy capped at 100,000 entries.
NearCacheConfiguration<Object, Object> nearCfg = new NearCacheConfiguration<>();
nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));

// Create (or get) the cache together with its near cache configuration.
IgniteCache<Object, Object> cache = ignite.getOrCreateCache(new CacheConfiguration<>("PUBLIC"), nearCfg);

6. Continuous Queries

💬

  • Continuous queries let you monitor data modifications happening in an Ignite cache.
  • Once a continuous query is started, you receive notifications for every data change that matches the query's filter, as in the sketch below.
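
A minimal continuous-query sketch from the thick client side, assuming the PUBLIC cache used above (the remote filter class must be available on the server nodes, for example via peer class loading):

import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ContinuousQueryExample {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start("config/ignite-configuration.xml")) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("PUBLIC");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Local listener: runs on this client for every update that passes the filter.
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                    System.out.println("changed: " + e.getKey() + " -> " + e.getValue());
            });

            // Remote filter: evaluated on the server nodes, so only matching
            // changes are sent over the network to this client.
            qry.setRemoteFilterFactory(() -> new CacheEntryEventFilter<Integer, String>() {
                @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {
                    return e.getKey() % 2 == 0; // notify only for even keys
                }
            });

            // The query stays active until the cursor is closed.
            try (QueryCursor<?> cur = cache.query(qry)) {
                cache.put(2, "this update triggers the local listener");
                Thread.sleep(1_000); // give the notification time to arrive
            }
        }
    }
}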

7. Ignite server.log When a Thick Client Connects

💬 Connection log when the client joins

[00:58:06,277][INFO][tcp-disco-srvr-[:39432]-#3-#98][TcpDiscoverySpi] TCP discovery accepted incoming connection [rmtAddr=/192.168.124.238, rmtPort=35199]
[00:58:06,277][INFO][tcp-disco-srvr-[:39432]-#3-#98][TcpDiscoverySpi] TCP discovery spawning a new thread for connection [rmtAddr=/192.168.124.238, rmtPort=35199]
[00:58:06,277][INFO][tcp-disco-sock-reader-[]-#68-#29013][TcpDiscoverySpi] Started serving remote node connection [rmtAddr=/192.168.124.238:35199, rmtPort=35199]
[00:58:06,287][INFO][tcp-disco-sock-reader-[67f52380 192.168.124.238:35199 client]-#68-#29013][TcpDiscoverySpi] Initialized connection with remote client node [nodeId=67f52380-1d53-4820-ab07-73b7e35f6eaf, rmtAddr=/192.168.124.238:35199]
[00:58:06,381][INFO][disco-event-worker-#99][GridDiscoveryManager] Added new node to topology: TcpDiscoveryNode [id=67f52380-1d53-4820-ab07-73b7e35f6eaf, consistentId=67f52380-1d53-4820-ab07-73b7e35f6eaf, addrs=ArrayList [127.0.0.1, 172.17.0.2], sockAddrs=HashSet [/127.0.0.1:0, /172.17.0.2:0], discPort=0, order=66, intOrder=34, lastExchangeTime=1639097886330, loc=false, ver=2.11.0#20210911-sha1:8f3f07d3, isClient=true]
[00:58:06,382][INFO][disco-event-worker-#99][GridDiscoveryManager] Topology snapshot [ver=66, locNode=8013adce, servers=1, clients=1, state=ACTIVE, CPUs=40, offheap=50.0GB, heap=2.0GB, aliveNodes=[TcpDiscoveryNode [id=8013adce-d3a5-46f7-8b8a-2ad0ce0685aa, consistentId=9439e851-2302-40d7-b545-80552a3615d2, isClient=false, ver=2.11.0#20210911-sha1:8f3f07d3], TcpDiscoveryNode [id=67f52380-1d53-4820-ab07-73b7e35f6eaf, consistentId=67f52380-1d53-4820-ab07-73b7e35f6eaf, isClient=true, ver=2.11.0#20210911-sha1:8f3f07d3]]]
[00:58:06,382][INFO][disco-event-worker-#99][GridDiscoveryManager]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[00:58:06,382][INFO][exchange-worker-#100][time] Started exchange init [topVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], crd=true, evt=NODE_JOINED, evtNode=67f52380-1d53-4820-ab07-73b7e35f6eaf, customEvt=null, allowMerge=true, exchangeFreeSwitch=false]
[00:58:06,385][INFO][exchange-worker-#100][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], err=null, rebalanced=true, wasRebalanced=true]
[00:58:06,387][INFO][exchange-worker-#100][GridDhtPartitionsExchangeFuture] Completed partition exchange [localNode=8013adce-d3a5-46f7-8b8a-2ad0ce0685aa, exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode [id=67f52380-1d53-4820-ab07-73b7e35f6eaf, consistentId=67f52380-1d53-4820-ab07-73b7e35f6eaf, addrs=ArrayList [127.0.0.1, 172.17.0.2], sockAddrs=HashSet [/127.0.0.1:0, /172.17.0.2:0], discPort=0, order=66, intOrder=34, lastExchangeTime=1639097886330, loc=false, ver=2.11.0#20210911-sha1:8f3f07d3, isClient=true], rebalanced=true, done=true, newCrdFut=null], topVer=AffinityTopologyVersion [topVer=66, minorTopVer=0]]
[00:58:06,387][INFO][exchange-worker-#100][GridDhtPartitionsExchangeFuture] Exchange timings [startVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], stage="Waiting in exchange queue" (0 ms), stage="Exchange parameters initialization" (0 ms), stage="Determine exchange type" (1 ms), stage="Exchange done" (2 ms), stage="Total time" (3 ms)]
[00:58:06,387][INFO][exchange-worker-#100][GridDhtPartitionsExchangeFuture] Exchange longest local stages [startVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=66, minorTopVer=0]]
[00:58:06,387][INFO][exchange-worker-#100][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=66, minorTopVer=0], crd=true]
[00:58:06,389][INFO][exchange-worker-#100][GridCachePartitionExchangeManager] Skipping rebalancing (no affinity changes) [top=AffinityTopologyVersion [topVer=66, minorTopVer=0], evt=NODE_JOINED, evtNode=67f52380-1d53-4820-ab07-73b7e35f6eaf, client=false]
[00:58:56,068][INFO][db-checkpoint-thread-#107][Checkpointer] Skipping checkpoint (no pages were modified) [checkpointBeforeLockTime=11ms, checkpointLockWait=0ms, checkpointListenersExecuteTime=16ms, checkpointLockHoldTime=17ms, reason='timeout']

💬 Metrics log after the client has joined

[00:59:00,676][INFO][grid-timeout-worker-#38][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=8013adce, uptime=1 day, 16:03:11.330]
    ^-- Cluster [hosts=2, CPUs=40, servers=1, clients=1, topVer=66, minorTopVer=0]
    ^-- Network [addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 172.18.0.1, 172.19.0.1, 172.20.0.1, 172.21.1.1, 172.21.2.1, 172.21.3.1, 192.168.122.1, 192.168.124.250], discoPort=39432, commPort=47100]
    ^-- CPU [CPUs=16, curLoad=0.1%, avgLoad=0.05%, GC=0%]
    ^-- Heap [used=358MB, free=64.23%, comm=1003MB]
    ^-- Off-heap memory [used=40MB, free=99.92%, allocated=51738MB]
    ^-- Page memory [pages=10234]
    ^--   sysMemPlc region [type=internal, persistence=true, lazyAlloc=false,
      ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%, allocRam=100MB, allocTotal=0MB]
    ^--   default region [type=default, persistence=true, lazyAlloc=true,
      ...  initCfg=256MB, maxCfg=51538MB, usedRam=40MB, freeRam=99.92%, allocRam=51538MB, allocTotal=109MB]
    ^--   metastoreMemPlc region [type=internal, persistence=true, lazyAlloc=false,
      ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.93%, allocRam=0MB, allocTotal=0MB]
    ^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,
      ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=100MB, allocTotal=0MB]
    ^--   volatileDsMemPlc region [type=user, persistence=false, lazyAlloc=true,
      ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=0MB]
    ^-- Ignite persistence [used=109MB]
    ^-- Outbound messages queue [size=0]
    ^-- Public thread pool [active=0, idle=0, qSize=0]
    ^-- System thread pool [active=0, idle=16, qSize=0]
    ^-- Striped thread pool [active=0, idle=16, qSize=0]

Cluster [hosts=2, CPUs=40, servers=1, clients=1, topVer=66, minorTopVer=0]
In the Cluster line above you can see that clients= is now 1, confirming that the thick client has joined the topology.
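
The same check can be done programmatically from any node; a short hedged fragment, assuming an already started instance named ignite:

// Count the client nodes currently in the topology; this should match the
// clients=1 field in the topology snapshot and metrics log above.
int clientCount = ignite.cluster().forClients().nodes().size();
System.out.println("client nodes = " + clientCount);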
