Free Snowflake SnowPro Advanced Data Engineer Exam Braindumps

A Data Engineer wants to check the status of a pipe named my_pipe. The pipe is inside a database named test and a schema named Extract (case-sensitive).

Which query will provide the status of the pipe?

  1. SELECT SYSTEM$PIPE_STATUS("test.'extract'.my_pipe");
  2. SELECT SYSTEM$PIPE_STATUS('test."Extract".my_pipe');
  3. SELECT * FROM SYSTEM$PIPE_STATUS('test."Extract".my_pipe');
  4. SELECT * FROM SYSTEM$PIPE_STATUS("test.'extract'.my_pipe");

Answer(s): B
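
For reference, SYSTEM$PIPE_STATUS is invoked as a scalar function (not a table function), and the case-sensitive schema must be double-quoted inside the single-quoted argument. A minimal sketch, assuming the pipe test."Extract".my_pipe already exists:

    -- Correct invocation: the whole qualified name is one single-quoted string,
    -- with double quotes only around the case-sensitive schema identifier.
    SELECT SYSTEM$PIPE_STATUS('test."Extract".my_pipe');

    -- The function returns a JSON string; individual fields can be pulled out
    -- with PARSE_JSON, e.g. the pipe's execution state.
    SELECT PARSE_JSON(SYSTEM$PIPE_STATUS('test."Extract".my_pipe')):executionState;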



Company A and Company B both have Snowflake accounts. Company A's account is hosted on a different cloud provider and region than Company B's account. Companies A and B are not in the same Snowflake organization.

How can Company A share data with Company B? (Choose two.)

  1. Create a share within Company A's account and add Company B's account as a recipient of that share.
  2. Create a share within Company A's account, and create a reader account that is a recipient of the share. Grant Company B access to the reader account.
  3. Use database replication to replicate Company A's data into Company B's account. Create a share within Company B's account and grant users within Company B's account access to the share.
  4. Create a new account within Company A's organization in the same cloud provider and region as Company B's account. Use database replication to replicate Company A's data to the new account. Create a share within the new account, and add Company B's account as a recipient of that share.
  5. Create a separate database within Company A's account to contain only those data sets they wish to share with Company B. Create a share within Company A's account and add all the objects within this separate database to the share. Add Company B's account as a recipient of the share.

Answer(s): A,B
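
For context, the share-creation mechanics referenced by several of these options look roughly like the sketch below on the provider side; the database, schema, table, and consumer account identifiers are placeholders:

    -- Create a share and grant it access to the objects to be shared.
    CREATE SHARE sales_share;
    GRANT USAGE ON DATABASE shared_db TO SHARE sales_share;
    GRANT USAGE ON SCHEMA shared_db.public TO SHARE sales_share;
    GRANT SELECT ON TABLE shared_db.public.orders TO SHARE sales_share;

    -- Add the consumer account (organization_name.account_name) to the share.
    ALTER SHARE sales_share ADD ACCOUNTS = consumer_org.consumer_account;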



A Data Engineer is trying to load the following rows from a CSV file into a table in Snowflake with the following structure:

[sample CSV rows and table structure not shown]

The engineer is using the following COPY INTO statement:

[COPY INTO statement not shown]

However, the following error is received:

Number of columns in file (6) does not match that of the corresponding table (3), use file format option error_on_column_count_mismatch=false to ignore this error
File 'address.csv.gz', line 3, character 1
Row 1 starts at line 2, column "STGCUSTOMER"[6]
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option.

Which file format option should be used to resolve the error and successfully load all the data into the table?

  1. ESCAPE_UNENCLOSED_FIELD = '\\'
  2. ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE
  3. FIELD_DELIMITER = ','
  4. FIELD_OPTIONALLY_ENCLOSED_BY = '"'

Answer(s): D
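
A sketch of the COPY INTO statement with the fix applied; the table STGCUSTOMER and the file address.csv.gz come from the error message, while the stage name and the other file format options are assumptions:

    -- Quoted fields may contain embedded commas; declaring that fields are
    -- optionally enclosed by double quotes stops those commas from being
    -- treated as delimiters, so the column counts line up.
    COPY INTO stgcustomer
      FROM @my_stage/address.csv.gz
      FILE_FORMAT = (TYPE = 'CSV'
                     FIELD_DELIMITER = ','
                     SKIP_HEADER = 1
                     FIELD_OPTIONALLY_ENCLOSED_BY = '"');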



A Data Engineer is working on a continuous data pipeline which receives data from Amazon Kinesis Firehose and loads the data into a staging table which will later be used in the data transformation process. The average file size is 300-500 MB.

The Engineer needs to ensure that Snowpipe is performant while minimizing costs.

How can this be achieved?

  1. Increase the size of the virtual warehouse used by Snowpipe.
  2. Split the files before loading them and set the SIZE_LIMIT option to 250 MB.
  3. Change the file compression size and increase the frequency of the Snowpipe loads.
  4. Decrease the buffer size in Kinesis Firehose to trigger delivery of files sized between 100 and 250 MB.

Answer(s): D
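
The reasoning here follows Snowflake's Snowpipe file-sizing guidance (roughly 100-250 MB compressed per file); Snowpipe runs on Snowflake-managed serverless compute, so warehouse size and the SIZE_LIMIT copy option are not the levers. A minimal auto-ingest pipe for such a pipeline might look like this sketch, where the stage, table, and file format are placeholders:

    -- Snowpipe definition over an external stage fed by Kinesis Firehose;
    -- AUTO_INGEST relies on cloud event notifications for newly landed files.
    CREATE OR REPLACE PIPE firehose_pipe
      AUTO_INGEST = TRUE
      AS
      COPY INTO staging_table
      FROM @firehose_stage
      FILE_FORMAT = (TYPE = 'JSON');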



What is a characteristic of the operations of streams in Snowflake?

  1. Whenever a stream is queried, the offset is automatically advanced.
  2. When a stream is used to update a target table, the offset is advanced to the current time.
  3. Querying a stream returns all change records and table rows from the current offset to the current time.
  4. Each committed and uncommitted transaction on the source table automatically puts a change record in the stream.

Answer(s): B
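
A short sketch illustrating the chosen behavior (table and column names are placeholders): querying a stream does not advance its offset, but consuming it in a DML statement does.

    -- Track changes on a source table.
    CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

    -- A plain SELECT returns change records but leaves the offset untouched.
    SELECT * FROM orders_stream;

    -- Using the stream in a DML statement consumes the change records and
    -- advances the offset when the transaction commits.
    INSERT INTO orders_history
    SELECT order_id, metadata$action
    FROM orders_stream;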



At what isolation level do Snowflake streams operate?

  1. Snapshot
  2. Repeatable read
  3. Read committed
  4. Read uncommitted

Answer(s): B



A CSV file, around 1 TB in size, is generated daily on an on-premise server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?

  1. Create a task in Snowflake that executes once a day and runs a COPY INTO statement that references the internal stage. The internal stage will read the files directly from the on-premise server and copy the newest file from the on-premise server into the Snowflake table.
  2. On the on-premise server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a task that executes once a day in Snowflake and runs a COPY INTO statement that references the internal stage. Schedule the task to start after the file lands in the internal stage.
  3. On the on-premise server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file from the internal stage when the new file lands in the internal stage.
  4. On the on-premise server, schedule a Python file that uses the Snowpark Python library. The Python script will read the CSV data into a DataFrame and generate an INSERT INTO statement that will directly load into the table. The script will bypass the need to move a file into an internal stage.

Answer(s): B
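
A rough sketch of the chosen approach, with the paths, object names, and schedule as placeholder assumptions: a SnowSQL-driven PUT on the on-premise server, followed by a scheduled Snowflake task that runs the COPY INTO.

    -- Run on the on-premise server via SnowSQL (e.g. from a daily cron job):
    PUT file:///data/exports/daily.csv @my_internal_stage AUTO_COMPRESS = TRUE;

    -- Defined once in Snowflake; scheduled to start after the file lands.
    CREATE OR REPLACE TASK load_daily_csv
      WAREHOUSE = load_wh
      SCHEDULE = 'USING CRON 0 6 * * * UTC'
    AS
      COPY INTO target_table
      FROM @my_internal_stage
      FILE_FORMAT = (FORMAT_NAME = 'my_csv_format')
      PURGE = TRUE;

    ALTER TASK load_daily_csv RESUME;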



A company is using Snowpipe to bring in millions of rows every day of Change Data Capture (CDC) into a Snowflake staging table on a real-time basis. The CDC needs to get processed and combined with other data in Snowflake and land in a final table as part of the full data pipeline.

How can a Data Engineer MOST efficiently process the incoming CDC on an ongoing basis?

  1. Create a stream on the staging table and schedule a task that transforms data from the stream, only when the stream has data.
  2. Transform the data during the data load with Snowpipe by modifying the related COPY INTO statement to include transformation steps such as CASE statements and JOINS.
  3. Schedule a task that dynamically retrieves the last time the task was run from information_schema.task_history and use that timestamp to process the delta of the new rows since the last time the task was run.
  4. Use a CREATE OR REPLACE TABLE AS statement that references the staging table and includes all the transformation SQL. Use a task to run the full CREATE OR REPLACE TABLE AS statement on a scheduled basis.

Answer(s): A
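
A compact sketch of the stream-plus-task pattern described in the answer; the MERGE logic, warehouse, schedule, and all object names are placeholder assumptions:

    -- Capture CDC rows landing in the staging table.
    CREATE OR REPLACE STREAM cdc_stream ON TABLE staging_table;

    -- The WHEN clause skips runs (and compute) whenever the stream is empty.
    CREATE OR REPLACE TASK process_cdc
      WAREHOUSE = transform_wh
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('cdc_stream')
    AS
      MERGE INTO final_table AS t
      USING cdc_stream AS s
        ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET t.payload = s.payload
      WHEN NOT MATCHED THEN INSERT (id, payload) VALUES (s.id, s.payload);

    ALTER TASK process_cdc RESUME;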


