Question:- At what age is cholesterol screening initiated if there is a family hx of dyslipidemia or premature cardiovascular disease?
Answer:- Age 2 years
Question:- What is Ecological Assessment?
Answer:- Evaluation of the child's natural environment.
It allows the OT to consider how the environment influences performance and to design interventions that are easily implemented in the child's natural environment.
Question:- What is Family Centered Treatment?
Answer:- It involves keeping a family's needs, goals, routines, and expectations at the forefront of intervention.
Question:- What is age at which sensorimotor play predominates?
Answer:- Birth - 6 months
Question:- At what age does being picked up quiet a child, who also shows pleasure when touched and relaxes and smiles when held?
Answer:- Birth - 6 months.
Question:- At what age do children use a variety of palmar grasping patterns, bring objects to the mouth, and play with the hands at midline?
Answer:- Birth - 6 months.
Question:- At what age do pediatric patients lift the head, raise the trunk when prone, sit propping on the hands, and roll from place to place?
Answer:- Birth - 6 months.
Question:- At what age do pediatric patients repeat actions for pleasurable experiences, bang objects on the table, and search with the eyes for sound?
Answer:- Birth - 6 months.
Question:- How to connect the Databricks with Apache Superset?
Answer:-
Apache Superset does not ship with a built-in driver for Databricks, so the SQLAlchemy dialect must be installed first.
Run the following commands from the directory containing Superset's docker-compose setup:
docker-compose exec superset pip install databricks-dbapi
docker-compose restart
Question:- How to Connect Databricks to BigQuery?
Answer:- You first need to complete some configuration on the GCP side (typically a service account with BigQuery access whose credentials are made available to the cluster). Then a table can be read with the Spark BigQuery connector:
table = "bigquery-public-data.samples.shakespeare"
df = spark.read.format("bigquery").option("table", table).load()
df.createOrReplaceTempView("shakespeare")
Question:- What is the connection string for Databricks with Apache Superset?
Answer:-
Apache Superset does not ship with a built-in driver for Databricks, so the SQLAlchemy dialect must be installed first. The connection string for Databricks in Apache Superset is:
databricks+pyhive://token:{token value}@{host url}:443/default
The HTTP path of the cluster must also be provided in the engine parameters:
{"connect_args":{"http_path":"sql/protocolv1/o/xxxxxxxx"}}
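Putting the pieces together, the URI and engine parameters can be assembled like this; the token, host, and HTTP path below are placeholders for illustration, not real values:

```python
# Placeholder values for illustration only -- substitute your own personal
# access token, workspace host, and the cluster's HTTP path.
token = "dapiXXXXXXXXXXXXXXXX"
host = "adb-1234567890123456.7.azuredatabricks.net"
http_path = "sql/protocolv1/o/xxxxxxxx"

# SQLAlchemy URI in the form the databricks-dbapi dialect expects
uri = f"databricks+pyhive://token:{token}@{host}:443/default"

# Goes into Superset's "Engine Parameters" field for the database
engine_params = {"connect_args": {"http_path": http_path}}

print(uri)
```

The token is passed as the password with the literal username `token`, which is how Databricks personal access tokens are supplied over its Thrift/HTTP interface.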
Question:- How can you create a Databricks private access token?
Answer:-
1) Select the user profile icon in the top-right corner of the Databricks workspace.
2) Select "User Settings."
3) Go to the "Access Tokens" tab, then click the "Generate New Token" button.
Question:- What is a Vault for Recovery Services?
Answer:- Azure backups are stored in a Recovery Services Vault (RSV). Using an RSV, we can configure backup policies and manage the recovery points that are retained.
Question:- Can we reuse code in the Azure databricks notebook?
Answer:- To reuse code from another Azure Databricks notebook, we import it into our notebook. There are two ways to do this:
1) If the code is located in a different workspace, we must first package it as a module and then attach that module to the cluster.
2) If the code is located in the same workspace, we can import and use it directly.
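As a minimal sketch of option 1, reusable code can be packaged as a plain Python module and loaded where it is needed. The module name and helper function below are made up for illustration; in Databricks itself you would attach the module to the cluster as a library, or pull in a notebook from the same workspace with the %run magic.

```python
import importlib.util
import pathlib
import tempfile

# Sketch: package reusable code as a plain Python module, then load it.
# The module name and function are hypothetical examples.
src = "def add_vat(price, rate=0.2):\n    return round(price * (1 + rate), 2)\n"

with tempfile.TemporaryDirectory() as d:
    module_path = pathlib.Path(d) / "pricing_utils.py"
    module_path.write_text(src)

    # Load the module from its file path and execute it
    spec = importlib.util.spec_from_file_location("pricing_utils", module_path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

    print(mod.add_vat(100))  # 120.0
```

For option 2, a single line such as `%run /Shared/utils` at the top of a notebook makes every name defined in that notebook available in the calling one.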
