How to install a Ceph RADOS gateway (RGW) to work with object storage and upload test files?



  • Change directory to the ceph-ansible group_vars directory

    cd /usr/share/ceph-ansible/group_vars
    

    Edit the all file and add the entries below

    $ sudo vi all
    

    radosgw_dns_name: <your dns name for rados gw>
    radosgw_frontend: civetweb
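
    For example, if the gateway will be reached as rgw-node1 (the hostname used for the gateway node later in this guide), the two entries might look like this:

    radosgw_dns_name: rgw-node1
    radosgw_frontend: civetweb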

    Copy the sample rgws file to rgws and set the entry below

    $ sudo cp rgws.sample  rgws
    $ sudo vi rgws
    

    copy_admin_key: true

    Edit the Ansible hosts file to add the new RADOS gateway node

    $ sudo vi /etc/ansible/hosts
    

    [rgws]
    rgw-node1
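
    Before running the playbook, you can optionally confirm Ansible can reach the new node; this assumes passwordless SSH for the ceph user is already in place, as for the existing nodes:

    ansible rgws -m ping -u ceph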

    Run the Ansible playbook

    cd /usr/share/ceph-ansible/
    ansible-playbook site.yml -u ceph
    ...
    ...
    PLAY RECAP ******************************************************************** 
    mon-node1                  : ok=91   changed=2    unreachable=0    failed=0   
    mon-node2                  : ok=91   changed=2    unreachable=0    failed=0   
    mon-node3                  : ok=91   changed=2    unreachable=0    failed=0   
    osd-node1                  : ok=164  changed=2    unreachable=0    failed=0   
    osd-node2                  : ok=164  changed=2    unreachable=0    failed=0   
    osd-node3                  : ok=164  changed=2    unreachable=0    failed=0   
    rgw-node1                  : ok=81   changed=17   unreachable=0    failed=0   
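
    Optionally, check the overall cluster status afterwards from a monitor node (or any node with the admin keyring):

    mon-node1> $ sudo ceph -s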
    

    Verify that the ceph-radosgw service is running on your RADOS gateway node and that it is listening on port 8080.

    rgw-node1> $ systemctl status ceph-radosgw@rgw.rgw-node1
    ceph-radosgw@rgw.rgw-node1.service - Ceph rados gateway
       Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
       Active: active (running) since Fri 2017-03-17 07:42:23 EDT; 1min 26s ago
     Main PID: 24895 (radosgw)
       CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.rgw-node1.service
               └─24895 /usr/bin/radosgw -f --cluster ceph --name client.rgw.rgw-node1 --setuser ceph --setgroup ceph
    
    rgw-node1> $ sudo netstat -plunt | grep -i rados
    tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      24895/radosgw    
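
    You can also confirm the gateway answers over HTTP; an unauthenticated request to the root URL should return a short XML bucket listing for the anonymous user (exact output varies by version):

    ceph-client> $ curl http://rgw-node1:8080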
    

    Log in to the RADOS gateway node and create the user accounts for S3 and Swift access

    rgw-node1> $ ssh rgw-node1
    rgw-node1> $ radosgw-admin user create --uid='user1' --display-name='First User' --access-key='S3user1' --secret-key='S3user1key'
    {
        "user_id": "user1",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [],
        "keys": [
            {
                "user": "user1",
                "access_key": "S3user1",
                "secret_key": "S3user1key"
            }
        ],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }
    

    For Swift access, create a subuser

    rgw-node1> $ radosgw-admin subuser create --uid='user1' --subuser='user1:swift' --secret-key='Swiftuser1key' --access=full
    {
        "user_id": "user1",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [
            {
                "id": "user1:swift",
                "permissions": "full-control"
            }
        ],
        "keys": [
            {
                "user": "user1",
                "access_key": "S3user1",
                "secret_key": "S3user1key"
            }
        ],
        "swift_keys": [
            {
                "user": "user1:swift",
                "secret_key": "Swiftuser1key"
            }
        ],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }
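
    The user details, including both the S3 keys and the new Swift key, can be re-displayed at any time:

    rgw-node1> $ radosgw-admin user info --uid='user1'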
    

    Swift

    Access the S3 and Swift APIs from the client node, starting with Swift.

    ceph-client> $ sudo pip install python-swiftclient
    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' post container-1
    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' list
    container-1
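
    swift stat can be used to confirm the account and the new container are reachable; it prints object and byte counts among other headers:

    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' stat container-1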
    

    Create a test file to upload to our Swift container.

    ceph-client> $ base64 /dev/urandom | head -c 10000000 > dummy_file1.txt
    

    Upload the file and list the contents of the Swift container

    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' upload container-1 dummy_file1.txt
    dummy_file1.txt
    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' list container-1
    dummy_file1.txt
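
    To double-check the upload, download the object under a scratch name and compare checksums; the two sums should match:

    ceph-client> $ swift -A http://rgw-node1:8080/auth/1.0  -U user1:swift -K 'Swiftuser1key' download container-1 dummy_file1.txt -o dummy_file1.check
    ceph-client> $ md5sum dummy_file1.txt dummy_file1.check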
    

    AWS S3

    Install and start dnsmasq, so that bucket-style subdomains of rgw-node1 resolve to the gateway's IP address (10.100.2.15 in this example)

    ceph-client> $ sudo yum install -y dnsmasq
    ceph-client> $ echo "address=/.rgw-node1/10.100.2.15" | sudo tee -a /etc/dnsmasq.conf
    address=/.rgw-node1/10.100.2.15
    ceph-client> $ sudo systemctl start dnsmasq
    ceph-client> $ sudo systemctl enable dnsmasq
    Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service.
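
    If the bind-utils package (which provides dig) is installed, you can query dnsmasq directly to confirm the wildcard entry works; it should answer with 10.100.2.15:

    ceph-client> $ dig @127.0.0.1 anything.rgw-node1 +short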
    

    Edit the /etc/resolv.conf file and add 127.0.0.1 as a nameserver

    ceph-client> $ sudo vi /etc/resolv.conf 
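
    The entry to add, typically before any existing nameserver lines so it is consulted first:

    nameserver 127.0.0.1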
    

    Check that the subdomains resolve correctly

    ceph-client> $ ping -c 1 anything.rgw-node1
    PING anything.rgw-node1 (10.100.2.15) 56(84) bytes of data.
    64 bytes from rgw-node1.ec2.internal (10.100.2.15): icmp_seq=1 ttl=64 time=0.150 ms
    
    --- anything.rgw-node1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms
    
    ceph-client> $ ping -c 1 mybucket.rgw-node1
    PING mybucket.rgw-node1 (10.100.2.15) 56(84) bytes of data.
    64 bytes from rgw-node1.ec2.internal (10.100.2.15): icmp_seq=1 ttl=64 time=0.123 ms
    
    --- mybucket.rgw-node1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
    
    ceph-client> $ ping -c 1 mars.rgw-node1
    PING mars.rgw-node1 (10.100.2.15) 56(84) bytes of data.
    64 bytes from rgw-node1.ec2.internal (10.100.2.15): icmp_seq=1 ttl=64 time=0.138 ms
    
    --- mars.rgw-node1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
    

    Configure s3cmd (install it first if it is not already present). These are all demo values; where no value is shown, just press return.

    ceph-client> $ s3cmd --configure
    
    Enter new values or accept defaults in brackets with Enter.
    Refer to user manual for detailed description of all options.
    
    Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
    Access Key: S3user1
    Secret Key: S3user1key
    Default Region [US]: 
    
    Encryption password is used to protect your files from reading
    by unauthorized persons while in transfer to S3
    Encryption password: 
    Path to GPG program [/usr/bin/gpg]: 
    
    When using secure HTTPS protocol all communication with Amazon S3
    servers is protected from 3rd party eavesdropping. This method is
    slower than plain HTTP, and can only be proxied with Python 2.7 or newer
    Use HTTPS protocol [Yes]: No
    
    On some networks all internet access must go through a HTTP proxy.
    Try setting it here if you can't connect to S3 directly
    HTTP Proxy server name: 
    
    New settings:
      Access Key: S3user1
      Secret Key: S3user1key
      Default Region: US
      Encryption password: 
      Path to GPG program: /usr/bin/gpg
      Use HTTPS protocol: False
      HTTP Proxy server name: 
      HTTP Proxy server port: 0
    
    Test access with supplied credentials? [Y/n] n
    
    Save settings? [y/N] y
    Configuration saved to '/home/ceph/.s3cfg'
    

    Edit the s3cmd configuration file and set the entries below

    ceph-client> $ vi /home/ceph/.s3cfg
    

    host_base = rgw-node1:8080
    host_bucket = %(bucket)s.rgw-node1:8080
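
    If you prefer to script the change rather than edit the file by hand, something like the following sed commands (paths and values as above) would set both entries:

    ceph-client> $ sed -i 's|^host_base = .*|host_base = rgw-node1:8080|' /home/ceph/.s3cfg
    ceph-client> $ sed -i 's|^host_bucket = .*|host_bucket = %(bucket)s.rgw-node1:8080|' /home/ceph/.s3cfg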

    List the buckets. The container created earlier via Swift shows up here too.

    ceph-client> $ s3cmd ls
    2017-03-17 11:55  s3://container-1
    

    Create a new bucket

    ceph-client> $ s3cmd mb s3://s3-bucket
    Bucket 's3://s3-bucket/' created
    

    List the buckets again

    ceph-client> $ s3cmd ls
    2017-03-17 11:55  s3://container-1
    2017-03-17 12:06  s3://s3-bucket
    

    Create a test file and upload it to the bucket

    ceph-client> $ base64 /dev/urandom | head -c 10000000 > dummy_file2.txt
    ceph-client> $ s3cmd put dummy_file2.txt s3://s3-bucket
    upload: 'dummy_file2.txt' -> 's3://s3-bucket/dummy_file2.txt'  [1 of 1]
     10000000 of 10000000   100% in    0s    57.26 MB/s  done
    

    List the contents of the bucket

    ceph-client> $ s3cmd ls s3://s3-bucket
    2017-03-17 12:06  10000000   s3://s3-bucket/dummy_file2.txt
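
    As with Swift, the round trip can be verified by downloading the object under a scratch name and comparing checksums; the sums should match:

    ceph-client> $ s3cmd get s3://s3-bucket/dummy_file2.txt dummy_file2.check
    ceph-client> $ md5sum dummy_file2.txt dummy_file2.check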
    

