My dataset contains a timestamp field and I need to extract the year, the month, the day and the hour from it. I typed these lines:

training.createOrReplaceTempView("df")
val hour = spark.sql("select getCurrentHour(payload_MeterReading_IntervalBlock_IReading_endTime) as hour from df")
Timestamps_df.createOrReplaceTempView("timestamps")
val tod = spark.…

Answer: "You can use the date processing functions which were introduced in Spark 1.5. You can use unix_timestamp to parse strings and cast the result to timestamp:

import org.apache.spark.sql.functions.unix_timestamp

Assuming you have the following data:

val df = Seq((1L, "…

In Spark < 1.6 you'll have to use something like this:

unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss").cast("double").cast("timestamp")

In Spark < 1.5 you should be able to use these with expr and a HiveContext." I found a similar solution here.
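Since Spark itself isn't available in a standalone snippet, here is a minimal pure-Scala sketch of the same extraction using java.time. The "MM/dd/yyyy HH:mm:ss" pattern comes from the answer's unix_timestamp call; the sample value is hypothetical, standing in for one entry of the timestamp column.

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Format pattern taken from the answer's unix_timestamp call
val fmt = DateTimeFormatter.ofPattern("MM/dd/yyyy HH:mm:ss")

// Hypothetical sample value standing in for the timestamp column
val ts = LocalDateTime.parse("05/26/2016 01:01:50", fmt)

// The four components the question asks for
val year  = ts.getYear        // 2016
val month = ts.getMonthValue  // 5
val day   = ts.getDayOfMonth  // 26
val hour  = ts.getHour        // 1

println(s"$year $month $day $hour")
```

Inside Spark (1.5+), the equivalent would be to apply the built-in year, month, dayofmonth, and hour column functions from org.apache.spark.sql.functions to the timestamp column after parsing it with the unix_timestamp cast shown above.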