Short-answer questions

1. You want to add a user to a Linux system. Write the command that creates the user account so that it satisfies the conditions below.

[root@ihd ~]# useradd -u ( ① ) -g ihdg -G support -( ② ) /bin/bash -( ③ ) 2015-12-30 ihd

<Conditions>
- Login ID: ihd
- UID: 520
- Groups: ihdg (primary group, GID 500), support (GID 501)
- Default shell: /bin/bash
- Account expiration date: 2015-12-30
[root@server1 /]# useradd --help
Usage: useradd [options] LOGIN
useradd -D
useradd -D [options]
Options:
-b, --base-dir BASE_DIR base directory for the home directory of the
new account
-c, --comment COMMENT GECOS field of the new account
-d, --home-dir HOME_DIR home directory of the new account
-D, --defaults print or change default useradd configuration
-e, --expiredate EXPIRE_DATE expiration date of the new account
-f, --inactive INACTIVE password inactivity period of the new account
-g, --gid GROUP name or ID of the primary group of the new
account
-G, --groups GROUPS list of supplementary groups of the new
account
-h, --help display this help message and exit
-k, --skel SKEL_DIR use this alternative skeleton directory
-K, --key KEY=VALUE override /etc/login.defs defaults
-l, --no-log-init do not add the user to the lastlog and
faillog databases
-m, --create-home create the user's home directory
-M, --no-create-home do not create the user's home directory
-N, --no-user-group do not create a group with the same name as
the user
-o, --non-unique allow to create users with duplicate
(non-unique) UID
-p, --password PASSWORD encrypted password of the new account
-r, --system create a system account
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL login shell of the new account
-u, --uid UID user ID of the new account
-U, --user-group create a group with the same name as the user
-Z, --selinux-user SEUSER use a specific SEUSER for the SELinux user mapping
① -u, --uid UID user ID of the new account
② -s, --shell SHELL login shell of the new account
③ -e, --expiredate EXPIRE_DATE expiration date of the new account
① 520
② s
③ e
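Substituting the answers back into the blanks gives the complete command. The sketch below only assembles and prints it rather than executing it, since actually creating the account requires root and the ihdg and support groups to already exist:

```shell
# Assembled from answers ①-③; printed instead of run, because useradd
# needs root privileges and pre-existing groups ihdg (GID 500) and
# support (GID 501).
cmd="useradd -u 520 -g ihdg -G support -s /bin/bash -e 2015-12-30 ihd"
echo "$cmd"
```

Note that `-e` expects the expiration date in YYYY-MM-DD form, matching the 2015-12-30 condition.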
2. You want to set up storage through LVM (Logical Volume Manager) so that hard disks can be managed more efficiently and flexibly. Write the steps for creating a 20G /data area from the partition devices /dev/sdb1 (10G) and /dev/sdb2 (10G).

(1) Create the PVs (Physical Volumes)
[root@ihd ~]# ( ① ) /dev/sdb1 /dev/sdb2
(2) Create the VG (Volume Group)
[root@ihd ~]# ( ② ) ihd_vg /dev/sdb1 /dev/sdb2
(3) Create the LV (Logical Volume)
[root@ihd ~]# ( ③ ) -( ④ ) 20G ihd_vg -( ⑤ ) data
[root@server1 /]# man lvm
COMMANDS
The following commands implement the core LVM functionality.
pvchange — Change attributes of a Physical Volume.
pvck — Check Physical Volume metadata.
pvcreate — Initialize a disk or partition for use by LVM.
pvdisplay — Display attributes of a Physical Volume.
pvmove — Move Physical Extents.
pvremove — Remove a Physical Volume.
pvresize — Resize a disk or partition in use by LVM2.
pvs — Report information about Physical Volumes.
pvscan — Scan all disks for Physical Volumes.
vgcfgbackup — Backup Volume Group descriptor area.
vgcfgrestore — Restore Volume Group descriptor area.
vgchange — Change attributes of a Volume Group.
vgck — Check Volume Group metadata.
vgconvert — Convert Volume Group metadata format.
vgcreate — Create a Volume Group.
vgdisplay — Display attributes of Volume Groups.
vgexport — Make volume Groups unknown to the system.
vgextend — Add Physical Volumes to a Volume Group.
vgimport — Make exported Volume Groups known to the system.
vgimportclone — Import and rename duplicated Volume Group (e.g. a hardware snapshot).
vgmerge — Merge two Volume Groups.
vgmknodes — Recreate Volume Group directory and Logical Volume special files.
vgreduce — Reduce a Volume Group by removing one or more Physical Volumes.
vgremove — Remove a Volume Group.
vgrename — Rename a Volume Group.
vgs — Report information about Volume Groups.
vgscan — Scan all disks for Volume Groups and rebuild caches.
vgsplit — Split a Volume Group into two, moving any logical volumes from one Volume Group to another by moving entire Physical Volumes.
lvchange — Change attributes of a Logical Volume.
lvconvert — Convert a Logical Volume from linear to mirror or snapshot.
lvcreate — Create a Logical Volume in an existing Volume Group.
lvdisplay — Display attributes of a Logical Volume.
lvextend — Extend the size of a Logical Volume.
lvmchange — Change attributes of the Logical Volume Manager.
lvmdiskscan — Scan for all devices visible to LVM2.
lvmdump — Create lvm2 information dumps for diagnostic purposes.
lvreduce — Reduce the size of a Logical Volume.
lvremove — Remove a Logical Volume.
lvrename — Rename a Logical Volume.
lvresize — Resize a Logical Volume.
lvs — Report information about Logical Volumes.
lvscan — Scan (all disks) for Logical Volumes.
The following commands are not implemented in LVM2 but might be in the future: lvmsadc, lvmsar, pvdata.
Cache Examples
Example 1: Creating a simple cache LV.
0. Create the origin LV
# lvcreate -L 10G -n lvx vg /dev/slow_dev
1. Create a cache data LV
# lvcreate -L 1G -n lvx_cache vg /dev/fast_dev
2. Create a cache metadata LV (~1/1000th size of CacheDataLV or 8MiB)
# lvcreate -L 8M -n lvx_cache_meta vg /dev/fast_dev
3. Create a cache pool LV, combining cache data LV and cache metadata LV
# lvconvert --type cache-pool --poolmetadata vg/lvx_cache_meta \
vg/lvx_cache
4. Create a cached LV by combining the cache pool LV and origin LV
# lvconvert --type cache --cachepool vg/lvx_cache vg/lvx
Example 2: Creating a cache LV with a fault tolerant cache pool LV.
Users who are concerned about the possibility of failures in their fast devices that could lead to data
loss might consider making their cache pool sub-LVs redundant. Example 2 illustrates how to do that.
Note that only steps 1 & 2 change.
0. Create an origin LV we wish to cache
# lvcreate -L 10G -n lvx vg /dev/slow_devs
1. Create a 2-way RAID1 cache data LV
# lvcreate --type raid1 -m 1 -L 1G -n lvx_cache vg \
/dev/fast1 /dev/fast2
2. Create a 2-way RAID1 cache metadata LV
# lvcreate --type raid1 -m 1 -L 8M -n lvx_cache_meta vg \
/dev/fast1 /dev/fast2
3. Create a cache pool LV combining cache data LV and cache metadata LV
# lvconvert --type cache-pool --poolmetadata vg/lvx_cache_meta \
vg/lvx_cache
4. Create a cached LV by combining the cache pool LV and origin LV
# lvconvert --type cache --cachepool vg/lvx_cache vg/lvx
Example 3: Creating a simple cache LV with writethrough caching.
Some users wish to ensure that any data written will be stored both in the cache pool LV and on the origin LV. The loss of a device associated with the cache pool LV in this case would not mean the loss of any data. When combining the cache data LV and the cache metadata LV to form the cache pool LV, properties of the cache can be specified - in this case, writethrough vs. writeback. Note that only step 3 is affected in this case.
0. Create an origin LV we wish to cache (yours may already exist)
# lvcreate -L 10G -n lvx vg /dev/slow
1. Create a cache data LV
# lvcreate -L 1G -n lvx_cache vg /dev/fast
2. Create a cache metadata LV
# lvcreate -L 8M -n lvx_cache_meta vg /dev/fast
3. Create a cache pool LV specifying cache mode "writethrough"
# lvconvert --type cache-pool --poolmetadata vg/lvx_cache_meta \
--cachemode writethrough vg/lvx_cache
4. Create a cache LV by combining the cache pool LV and origin LV
# lvconvert --type cache --cachepool vg/lvx_cache vg/lvx
① pvcreate
② vgcreate
③ lvcreate
④ L
⑤ n
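Putting the answers together yields the full procedure, from raw partitions to a mounted /data. The sketch below only collects and prints the commands in order, since running them for real needs root and actual /dev/sdb1 and /dev/sdb2 partitions; the ext4 filesystem choice and the mount step are additions beyond what the question asks, included to show how the LV actually becomes the /data directory:

```shell
# Dry-run sketch: gather the commands rather than execute them,
# because they require root and real /dev/sdb partitions.
steps=(
  "pvcreate /dev/sdb1 /dev/sdb2"           # ① initialize both partitions as PVs
  "vgcreate ihd_vg /dev/sdb1 /dev/sdb2"    # ② pool them into the ihd_vg volume group
  "lvcreate -L 20G -n data ihd_vg"         # ③ lvcreate, ④ -L (size), ⑤ -n (LV name)
  "mkfs.ext4 /dev/ihd_vg/data"             # filesystem choice is an assumption
  "mkdir -p /data"
  "mount /dev/ihd_vg/data /data"           # the LV now backs the /data directory
)
printf '%s\n' "${steps[@]}"
```

The resulting LV is addressable as /dev/ihd_vg/data (or /dev/mapper/ihd_vg-data), and spans both 10G PVs to reach 20G.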
3. The administrator, Hong Gil-dong, wants to monitor a Linux system. Fill in the ( blanks ) with the appropriate content.

(1) ( ① ) is a utility used to monitor the overall operating state of a Linux system in real time and to manage processes. The output below was produced by running ( ① ) with the options "a: sort by memory usage" and "H: show every individual thread". The screen also reflects entering the command "( ② )" after the program started, in order to view the state of every CPU.
??? - 18:28:12 up 4:23, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 314 total, 1 running, 313 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16019528k total, 528320k used, 15491208k free, 36836k buffers
Swap: 10485752k total, 0k used, 10485752k free, 160200k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2322 root 20 0 188m 35m 1716 S 0.0 0.2 0:00.06 glusterfs
2323 root 20 0 188m 35m 1716 S 0.0 0.2 0:00.00 glusterfs
2336 root 20 0 188m 35m 1716 S 0.0 0.2 0:00.00 glusterfs
(2) Complete the command so that the two processes with PIDs 2322 and 2323 are terminated unconditionally, using the kill command only once.
[root@ihd ~]# kill ( ③ ) 2322 2323
(3) The command that displays all of the system's processes in a tree structure is ( ④ ).
[root@server1 /]# man top
Summary-Area-defaults
'l' - Load Avg/Uptime On (thus program name)
't' - Task/Cpu states On (1+1 lines, see '1')
'm' - Mem/Swap usage On (2 lines worth)
'1' - Single Cpu On (thus 1 line if smp)
top - 16:36:47 up 5:20, 2 users, load average: 0.00, 0.03, 0.05
Tasks: 403 total, 1 running, 400 sleeping, 2 stopped, 0 zombie
%Cpu(s): 0.5 us, 0.7 sy, 0.0 ni, 98.7 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem: 3869044 total, 1335124 used, 2533920 free, 1076 buffers
KiB Swap: 4079612 total, 0 used, 4079612 free. 394028 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
85046 root 20 0 124132 1952 1180 R 2.3 0.1 0:01.03 top
1506 gdm 20 0 1015040 19740 12280 S 0.7 0.5 0:39.72 gnome-settings-
139 root 20 0 0 0 0 S 0.3 0.0 0:08.43 rcuos/1
992 root 20 0 269728 4292 3456 S 0.3 0.1 0:24.28 vmtoolsd
1654 gdm 20 0 1188076 101504 29136 S 0.3 2.6 0:16.98 gnome-shell
1 root 20 0 192012 7192 3980 S 0.0 0.2 0:13.39 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.08 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.23 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root rt 0 0 0 0 S 0.0 0.0 0:00.41 migration/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/0
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/1
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/2
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/3
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/4
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/5
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/6
16 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/7
17 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/8
18 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/9
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/10
20 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/11
21 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/12
22 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/13
23 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/14
24 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/15
25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/16
26 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/17
27 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/18
28 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/19
29 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/20
30 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/21
31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/22
32 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/23
33 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/24
34 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/25
35 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/26
36 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/27
37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/28
38 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/29
39 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/30
40 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/31
41 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/32
42 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/33
43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/34
44 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/35
45 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/36
46 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/37
47 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/38
48 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/39
49 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/40
After pressing 1:
top - 16:37:18 up 5:20, 2 users, load average: 0.00, 0.03, 0.05
Tasks: 403 total, 4 running, 397 sleeping, 2 stopped, 0 zombie
%Cpu0 : 14.2 us, 6.1 sy, 0.0 ni, 79.3 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu1 : 12.7 us, 9.6 sy, 0.0 ni, 77.4 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem: 3869044 total, 1336232 used, 2532812 free, 1076 buffers
KiB Swap: 4079612 total, 0 used, 4079612 free. 394056 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6368 root 20 0 442572 125196 11196 S 16.4 3.2 5:40.04 setroubleshootd
757 root 20 0 56604 15736 15260 S 5.6 0.4 2:45.75 systemd-journal
978 root 20 0 4360 596 504 R 3.9 0.0 4:42.68 rngd
1954 root 20 0 347088 16484 10912 S 1.6 0.4 0:45.76 rsyslogd
85046 root 20 0 124132 1952 1180 R 1.3 0.1 0:01.58 top
1 root 20 0 192012 7192 3980 S 0.3 0.2 0:13.41 systemd
137 root 20 0 0 0 0 R 0.3 0.0 0:14.00 rcu_sched
950 root 16 -4 116792 1812 1332 S 0.3 0.0 0:01.61 auditd
981 root 20 0 391552 3912 3128 S 0.3 0.1 0:01.37 accounts-daemon
1004 dbus 20 0 37800 2728 1456 S 0.3 0.1 0:05.17 dbus-daemon
1506 gdm 20 0 1015040 19740 12280 S 0.3 0.5 0:39.82 gnome-settings-
2 root 20 0 0 0 0 S 0.0 0.0 0:00.08 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.23 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root rt 0 0 0 0 S 0.0 0.0 0:00.42 migration/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/0
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/1
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/2
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/3
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/4
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/5
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/6
16 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/7
17 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/8
18 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/9
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/10
20 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/11
21 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/12
22 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/13
23 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/14
24 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/15
25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/16
26 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/17
27 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/18
28 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/19
29 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/20
30 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/21
31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/22
32 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/23
33 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/24
34 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/25
35 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/26
36 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/27
37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/28
38 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/29
39 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/30
40 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/31
41 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/32
42 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/33
43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuob/34
[root@server1 /]# kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR
31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7
58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX
[root@server1 /]# pstree --help
pstree: unrecognized option '--help'
Usage: pstree [ -a ] [ -c ] [ -h | -H PID ] [ -l ] [ -n ] [ -p ] [ -g ] [ -u ]
[ -A | -G | -U ] [ PID | USER ]
pstree -V
Display a tree of processes.
-a, --arguments show command line arguments
-A, --ascii use ASCII line drawing characters
-c, --compact don't compact identical subtrees
-h, --highlight-all highlight current process and its ancestors
-H PID,
--highlight-pid=PID highlight this process and its ancestors
-g, --show-pgids show process group ids; implies -c
-G, --vt100 use VT100 line drawing characters
-l, --long don't truncate long lines
-n, --numeric-sort sort output by PID
-N type,
--ns-sort=type sort by namespace type (ipc, mnt, net, pid, user, uts)
-p, --show-pids show PIDs; implies -c
-s, --show-parents show parents of the selected process
-S, --ns-changes show namespace transitions
-u, --uid-changes show uid transitions
-U, --unicode use UTF-8 (Unicode) line drawing characters
-V, --version display version information
-Z,
--security-context show SELinux security contexts
PID start at this PID; default is 1 (init)
USER show only trees rooted at processes of this user
① top
② 1
③ -9
④ pstree
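Answers ③ and ④ can be verified safely on throwaway processes: start two background sleeps standing in for PIDs 2322 and 2323, terminate both with a single `kill -9` (SIGKILL cannot be caught or ignored, hence "unconditionally"), and inspect the hierarchy with pstree where it is installed:

```shell
# Spawn two disposable background processes in place of 2322 and 2323.
sleep 300 & p1=$!
sleep 300 & p2=$!
kill -9 "$p1" "$p2"        # answer ③: one kill invocation, two PIDs, signal 9
wait "$p1" 2>/dev/null; rc1=$?
wait "$p2" 2>/dev/null; rc2=$?
echo "$rc1 $rc2"           # 137 = 128 + 9, i.e. terminated by SIGKILL
# answer ④: pstree shows the whole process tree (skipped if not installed)
command -v pstree >/dev/null && pstree -p $$ >/dev/null || true
```

The exit status 137 confirms SIGKILL delivery; signal 9 appears in the `kill -l` listing above.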