Channel: Media Foundation Development for Windows Desktop forum

AVI MJPEG decoding on Windows 10 1803: mfmjpegdec.dll issue?


Hi,
I have an issue using an AVI MJPEG file after the latest update of Windows 10.

The last time I tested my application, it worked well on Windows 10 version 1709.

With the latest update (1803), it does not work.
My AVI video has a resolution of 720x576 (interleaved frames).

I use the format MFVideoFormat_YUY2 for decoding.

With Windows version 1709, when I call the IMFMediaBuffer Lock function, the pcbCurrentLength parameter returns the value 829440.
This is correct: 720 x 576 x 2 (the factor of 2 is for the YUY2 decoding format).

With Windows version 1803, the same program returns the value 414720, and the output data is all zeros.

I suspect an issue in mfmjpegdec.dll.
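
For reference, the check I am doing looks roughly like this (a simplified sketch; pBuffer is the media buffer obtained from the decoded sample, error handling omitted):

// Verify the decoded YUY2 buffer size: width * height * 2 bytes per pixel.
BYTE *pData = nullptr;
DWORD cbMaxLength = 0;
DWORD cbCurrentLength = 0;
HRESULT hr = pBuffer->Lock(&pData, &cbMaxLength, &cbCurrentLength);
if (SUCCEEDED(hr))
{
    const DWORD expected = 720 * 576 * 2; // 829440 for YUY2
    // On 1709, cbCurrentLength == expected; on 1803 it is 414720
    // (exactly half) and pData contains only zeros.
    pBuffer->Unlock();
}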

Regards,

Joël


Custom Media Sink Help needed (MF_E_TRANSFORM_NEED_MORE_INPUT)


Hello,

I am trying to create a custom media sink (and stream sink) for video playback in OpenGL using C# and MediaFoundation.NET.
My issue is that my Stream Sink never receives any samples.
My test scenario is a simple mp4 file: a MediaSession with my video sink and the default audio sink connected directly to the media source, with the topology resolved by the TopologyLoader.

Using MFTrace, I have found that somewhere in the topology an MFT keeps raising this error:
MF_E_TRANSFORM_NEED_MORE_INPUT.

I have set up my stream sink to only accept RGB32 samples, so I suppose that somewhere in the topology a conversion from H.264 YV12 (or similar) to RGB32 happens. However, I cannot pinpoint which MFT is failing (the decoder, the color transform, or some other).

Side note: I have no idea what the topology looks like in the end, because I am not familiar with the MFTrace output, and the article 'Automating Trace Analysis' from the Media Foundation blog references cmd files that are no longer available. If anyone has them (or maybe MS themselves), I would appreciate it if they could re-upload them somewhere.

If any more details are required, I can provide full traces or snippets.

Thanks in advance,

John

Update

I have programmatically iterated through the completed topology, and this is what I found (for the video path):
Source -> (Video, h264) -> MFTransform -> (Video, NV12) -> MFTransform -> (Video, RGB32) -> My Media Sink

I still have no idea how to find which MFTransform is giving the error.
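
For reference, this is roughly how I walk the resolved topology (shown in C++ terms for brevity; my actual code is C# with MediaFoundation.NET, but the interfaces map directly). Reading each transform node's current output type is how I obtained the chain above:

// Walk all nodes of the resolved topology and inspect the transform nodes.
WORD nodeCount = 0;
pFullTopology->GetNodeCount(&nodeCount);
for (WORD i = 0; i < nodeCount; i++)
{
    ComPtr<IMFTopologyNode> node;
    pFullTopology->GetNode(i, &node);

    MF_TOPOLOGY_TYPE type;
    node->GetNodeType(&type);
    if (type == MF_TOPOLOGY_TRANSFORM_NODE)
    {
        ComPtr<IUnknown> unk;
        node->GetObject(&unk);          // the node object is the MFT
        ComPtr<IMFTransform> mft;
        unk.As(&mft);

        ComPtr<IMFMediaType> outType;
        mft->GetOutputCurrentType(0, &outType);
        // Inspect MF_MT_MAJOR_TYPE / MF_MT_SUBTYPE of outType here.
    }
}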



How to determine the audio stream index in the source reader OnReadSample callback function


I'm writing an application to create MP4 video from a camera and a microphone using Media Foundation.

I follow these steps:

1. Aggregate the camera source and microphone source using MFCreateAggregateSource.

2. Open the aggregate source reader with an asynchronous callback.

3. Add a video stream and an audio stream to the SinkWriter and call BeginWriting.

4. Read samples from the source reader with the stream index MF_SOURCE_READER_ANY_STREAM.

5. In the OnReadSample callback, write each received sample to the SinkWriter.

During my tests, the "dwStreamIndex" of the video stream is always zero, but the "dwStreamIndex" of the audio stream is not stable.

(On my PC it is "1", but on another computer it is "2".)

Please tell me why the "dwStreamIndex" of the audio stream is not stable. What does "dwStreamIndex" depend on?
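
In the meantime, I am considering resolving the stream by its major type instead of hard-coding an index. A sketch (error handling omitted; m_pReader is the source reader):

// Inside OnReadSample: map dwStreamIndex to audio/video by major type
// instead of assuming fixed indices.
HRESULT OnReadSample(HRESULT hrStatus, DWORD dwStreamIndex,
                     DWORD dwStreamFlags, LONGLONG llTimestamp,
                     IMFSample *pSample)
{
    if (pSample != NULL) // pSample can be NULL for gap/tick notifications
    {
        ComPtr<IMFMediaType> mediaType;
        m_pReader->GetCurrentMediaType(dwStreamIndex, &mediaType);

        GUID majorType = GUID_NULL;
        mediaType->GetMajorType(&majorType);

        if (majorType == MFMediaType_Audio)
        {
            // write pSample to the SinkWriter's audio stream
        }
        else if (majorType == MFMediaType_Video)
        {
            // write pSample to the SinkWriter's video stream
        }
    }
    return S_OK;
}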

unresolved external symbol _MFEnumDeviceSources@12


Hi,

I'm new to Windows MF / C++ programming and I have a basic program that lists capture devices, but the linker cannot resolve the only two Media Foundation functions called in the code. The MF headers are included, and the linker's library directories point to the .lib files for those headers. So, what is wrong with the code or the configuration? Thanks.

// ListCaptureDevices.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"
#include <windows.h>
#include <new>
#include <windowsx.h>
#include <d3d9.h>
/* include Media Foundation main library */
#include <mfidl.h>
#include <Mfapi.h>
#include <iostream>
#include <string>
#include <cstdlib>
#include <mfreadwrite.h>
#include <mferror.h>
#include <strsafe.h>

// Link the Media Foundation import libraries: MFCreateAttributes lives in
// mfplat.lib, MFEnumDeviceSources in mf.lib, and the attribute GUIDs in
// mfuuid.lib. Without these the linker reports unresolved externals such
// as _MFEnumDeviceSources@12.
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfuuid.lib")

using namespace std;

// manage return values that should be S_OK
void WINAPI manage_hresult_s_ok(HRESULT result, string message)
{
    if (result != S_OK) {
        cerr << message << endl;
    }
}

void WINAPI manage_lresult_s_ok(LRESULT result, string message)
{
    if (result != S_OK) {
        cerr << message << endl;
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    // Most Media Foundation calls require the platform to be started first.
    manage_hresult_s_ok(MFStartup(MF_VERSION), "error starting Media Foundation");

    unsigned int nbListedMediaSources = 0;
    // the interface defining the criteria store
    IMFAttributes *mediaSourcesCriteriaAttributesInterface = NULL;
    manage_hresult_s_ok(MFCreateAttributes(&mediaSourcesCriteriaAttributesInterface, 1),
                        "error creating attributes"); // room for one attribute
    // set the criteria as video capture device type
    mediaSourcesCriteriaAttributesInterface->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                                                     MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
    // MFEnumDeviceSources fills in a pointer to an array of IMFActivate*,
    // so the variable is IMFActivate** and we pass its address.
    IMFActivate **mediaSourcesActivationObjects = NULL;
    // list all video capture devices available
    HRESULT listing_result = MFEnumDeviceSources(mediaSourcesCriteriaAttributesInterface,
                                                 &mediaSourcesActivationObjects,
                                                 &nbListedMediaSources);
    manage_hresult_s_ok(listing_result, "error while getting the capture devices list");
    cout << "found " << nbListedMediaSources << " capture devices" << endl;

    // Once done with the IMFActivate pointers from the listed media sources,
    // release them and call CoTaskMemFree to free the array that
    // MFEnumDeviceSources allocated.
    if (mediaSourcesActivationObjects != NULL) {
        for (unsigned int i = 0; i < nbListedMediaSources; ++i) {
            mediaSourcesActivationObjects[i]->Release();
        }
        CoTaskMemFree(mediaSourcesActivationObjects);
    }
    mediaSourcesCriteriaAttributesInterface->Release();

    MFShutdown();
    return 0;
}

MFCreateSinkWriterFromURL creates mp4 file with wrong duration

Hello,

I'm using MFCreateSinkWriterFromURL to create an mp4 file from an existing audio/video stream. By existing, I mean the capture is started before the sink writer is created, and the stream is already being sent to a remote host.

What I am trying to achieve is to record the stream sent to the remote host into an mp4 file. So I use MFCreateSinkWriterFromURL to create a SinkWriter, wait for the first I-frame by checking the MFSampleExtension_CleanPoint attribute, and then start the recording with IMFSinkWriter::BeginWriting.

The problem is that the duration of the mp4 file reflects the full duration of the webcam stream, not just the recorded part.

So, for example, if I start the webcam capture and wait for 3 minutes, then start recording the stream into an mp4 file and wait for 30 seconds, the length of the video (as displayed in VLC and ffplay) is 3 minutes and 30 seconds, when it should be 30 seconds.

I tried resetting the sample time on the IMFSamples to make the recording start at 0. I also tried setting the MFSampleExtension_Discontinuity attribute on the first video IMFSample. I have logged the duration and timestamp of every sample that should be written into the mp4 file to a text file, and they look right.
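
The timestamp re-baselining I tried looks roughly like this (a simplified sketch; m_firstSampleTime is an illustrative member initialized to -1):

// Re-baseline sample times so the first recorded sample starts at 0.
LONGLONG sampleTime = 0;
pSample->GetSampleTime(&sampleTime);
if (m_firstSampleTime < 0)
{
    m_firstSampleTime = sampleTime;   // remember the first sample's time
}
pSample->SetSampleTime(sampleTime - m_firstSampleTime);
pSinkWriter->WriteSample(streamIndex, pSample);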

Any idea on what could be wrong? Thanks for your help.


Why is audio losing one millisecond after resampling with the Audio Resampler DSP from or to 44.1 kHz?


Hello

Maybe I should ask the following questions in the Pro Audio forum, because I have the feeling that this is related to audio resampling formulas in general, but I will give it a try here first.

I would like to know how to properly allocate an output buffer for the Audio Resampler DSP when using a 44.1 kHz format. I have the resampler working, but my calculated output buffer size is bigger than what the resampler puts out. It works, but the rest of the buffer is then left unused (partially filled). When resampling audio from or to 44.1 kHz, the output data coming out of the Audio Resampler DSP is 1 millisecond shorter than the input data. I have several questions now:

1. Why is that?

2. Would it not desynchronize the audio?

3. How do I calculate the output buffer size for 44.1 kHz resampling?

4. Can we use MFCreateMediaBufferFromMediaType to allocate an output buffer for the Audio Resampler DSP?


Regarding question 3: I have tried for a few days, but my calculation always gives a buffer size completely different from what the Audio Resampler DSP actually renders into the buffer.

It would be nice if someone could explain how to correctly allocate an output buffer for the Audio Resampler DSP when using 44.1 kHz.
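
For reference, my current calculation is roughly the following (my own rule of thumb, not something from the documentation; nInputFrames, inSampleRate, outSampleRate and outBlockAlign are illustrative names). I suspect the 1 ms discrepancy comes from 44.1 kHz not being a whole number of frames per millisecond (44.1 frames/ms), so millisecond-based buffer math can never come out exact:

// Rule-of-thumb output size when resampling nInputFrames from
// inSampleRate to outSampleRate, rounded up to be safe.
// Example: 480 frames at 48000 Hz -> 480 * 44100 / 48000 = 441 frames.
UINT64 outputFrames = ((UINT64)nInputFrames * outSampleRate + inSampleRate - 1)
                      / inSampleRate;
DWORD outputBytes = (DWORD)(outputFrames * outBlockAlign); // frames * bytes per frame
// Note: the DSP buffers samples internally, so a single ProcessOutput
// call may return fewer bytes than this upper bound.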

Regards,

Francis






headphone sound is broken


I accidentally enabled echo on my headphones in Settings and I don't know how to remove it...

Sorry for my bad English, but I don't know how to fix my headphones so that they don't have echo...

Can I have any help?

Decoded file runs too fast


Hi,

I have a problem rendering my video file. My application captures samples from a webcam, decodes these samples/images (NV12 -> RGB32) for further processing, and then encodes them (H.264) before writing them to disk.

My main problem: when I load this encoded video file back into my application, the frame rate is off, meaning the video plays way too fast.

I am using a SourceReader (asynchronous) for acquisition/decoding and a SinkWriter for encoding and writing to disk.

I have tried to set all attributes so that the media type matches the one that was used for writing the video file in the first place, but still no luck. Am I responsible for doing the timing? Any tips on what I could do?

When I load the file with Media Player or VLC, everything is fine.
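
Since the file plays at the right speed elsewhere, I suspect the reading side is the problem: as far as I understand, the source reader delivers samples as fast as they are requested and does not pace them, so presentation timing is up to me. A rough sketch of what I mean (m_playbackStartTicks and RenderFrame are illustrative names, not real API):

// Pace presentation using each sample's timestamp relative to a
// wall-clock start time; the source reader itself does not pace.
LONGLONG sampleTime = 0;                // 100-ns units
pSample->GetSampleTime(&sampleTime);
DWORD targetMs = (DWORD)(sampleTime / 10000);
DWORD elapsedMs = GetTickCount() - m_playbackStartTicks;
if (targetMs > elapsedMs)
{
    Sleep(targetMs - elapsedMs);        // crude; a waitable timer would be better
}
RenderFrame(pSample);                   // hypothetical rendering call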

Greetings,

Dosfried


Is it possible to have a new video frame available callback in Media Foundation?


I am looking into Media Foundation and have found a MediaEngine example that works in frame-server mode, but the only way I can see to find out when a new frame is available is by polling OnVideoStreamTick.

Is there another way to play back a video file where a callback or event tells me when a new frame is available, instead of polling with OnVideoStreamTick?

I am aware that I should be calling TransferVideoFrame on vsync, but I don't want to do it that way.

I have found that it is possible using Windows.Media.Playback in a UWP app with the VideoFrameAvailable event, but I didn't want to do it that way either, as I am writing a desktop app.
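
For context, the polling I do now looks roughly like this (simplified; m_mediaEngine, m_targetSurface and m_dstRect are my own names):

// OnVideoStreamTick returns S_OK when a new frame is ready,
// S_FALSE when there is nothing new yet.
LONGLONG pts = 0;
if (m_mediaEngine->OnVideoStreamTick(&pts) == S_OK)
{
    // Blit the new frame into the target surface.
    m_mediaEngine->TransferVideoFrame(m_targetSurface.Get(), nullptr, &m_dstRect, nullptr);
}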

Please could someone provide an example of IMFMediaEngineSrcElements


Please, could someone provide an example of how to use IMFMediaEngineSrcElements?

What prevents Media Foundation from playing HLS streams above 1080p?

The HLS stream I'm using offers resolutions of 360p, 480p, 720p, 1080p, 1440p, and 4K, but Media Foundation starts with one 360p fragment and then only plays 1080p; it does not play 4K, even though all sixteen 1080p fragments are downloaded within a second. I expected Media Foundation to choose the 4K fragments because of the connection speed, but apparently it doesn't. When I supply a .m3u8 playlist containing only a 4K video, it plays it as it should. What other factors play a part in Media Foundation's decision about which quality to play? And how would I allow higher-quality video when it clearly should have no problem playing it?

FLAC metadata/tags being read incorrectly in Windows Media Player on Windows 10


I recently converted my lossless music to FLAC now that Windows 10 supports it natively. But when viewing these FLAC files in Windows Media Player, the year of the songs shows up as "unknown year" despite the year being labelled correctly. I've noticed that, compared to MP3 files, the Details tab in the properties of a FLAC file has an extra tag called "Date released" below the Year tag; filling it in makes the year display within WMP, although this is quite a tedious workaround.

From what it looks like, WMP isn't able to read the Vorbis DATE tag. There's also a problem with the artist's name not showing up in the Contributing Artists field, probably for the same reason. I'm not sure how well WMP can read the Vorbis comment tagging system, but the support seems poorly implemented.

On a side note, there doesn't appear to be a standard Windows Media Player FLAC icon for when WMP is the default media player; instead it uses the M4A one.




MFT support for multiple inputs for N:1 composition


Hello, 

Can anyone point me to an article or documentation that shows whether an MFT supports multiple input streams? We would like to implement N:1 video composition.
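
For concreteness, what I am looking for is something like the following check (a sketch using IMFTransform::GetStreamLimits; pMFT stands for an MFT instance already created):

// Ask an MFT how many input/output streams it supports.
DWORD inMin = 0, inMax = 0, outMin = 0, outMax = 0;
HRESULT hr = pMFT->GetStreamLimits(&inMin, &inMax, &outMin, &outMax);
if (SUCCEEDED(hr) && inMax > 1)
{
    // The MFT can accept more than one input stream; additional
    // inputs are added with IMFTransform::AddInputStreams.
}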

Thanks

IMFSinkWriter: set ICodecAPI parameters


Hi,

I'm trying to use the IMFSinkWriter API to encode an H.264 video file. Starting with the example from the Microsoft Sink Writer tutorial, it was relatively easy to create a first mp4 video file with the expected video stream. Very cool.

Now I want to configure additional encoder parameters (H.264 encoder parameters) using the ICodecAPI interface.

ICodecAPI* pCodecApi = nullptr;
HRESULT hr = pSinkWriter->GetServiceForStream(streamIndex, GUID_NULL, __uuidof(ICodecAPI), (LPVOID*)&pCodecApi);

Among other things, I want to enable CABAC, which seems to be disabled by the Microsoft H.264 encoder, even if you specify the High profile parameter (eAVEncH264VProfile_High).

hr |= pMediaTypeOut->SetUINT32 (MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_High);
VARIANT var = {0};
var.vt = VT_BOOL;
var.boolVal = VARIANT_TRUE;
hr = pCodecApi->SetValue(&CODECAPI_AVEncH264CABACEnable, &var);

The SetValue method returns S_OK.

But CABAC entropy coding is still not active. I'm using the tool MediaInfo to check the parameters in the video file.

I tried other parameters, like the maximum number of reference frames (CODECAPI_AVEncVideoMaxNumRefFrame), which always remains 2 (the default value of this parameter), even if I set it to 1 or 0.

VARIANT var = {0};
var.vt = VT_UI4;
var.ulVal = 1;
hr = pCodecApi->SetValue(&CODECAPI_AVEncVideoMaxNumRefFrame, &var);

In another attempt, I read back the previously set values, and they are the ones I set before. Everything as expected.

hr = pCodecApi->GetValue (&CODECAPI_AVEncH264CABACEnable, &var2);

I don't know what my mistake is in setting these additional parameters.

Does anyone have a working example of setting such additional ICodecAPI parameters for a Microsoft encoder?



MFSampleExtension_DeviceTimestamp does not contain documented QPC value


When capturing using AVStream (i.e. connecting a USB camera) and reading samples using Media Foundation, the MFSampleExtension_DeviceTimestamp attribute does not contain a raw QPC value. The documentation for this attribute states:

"This attribute is set on media samples created by a media source for a capture device. This attribute carries the non-adjusted value of the query performance counter (QPC). This attribute is available for MFTs inserted into the capture pipeline.
To get the time stamp relative to the start of streaming, call the IMFSample::GetSampleTime method."

The timestamp value I get is the same as the one reported by GetSampleTime, which is a reference time relative to the start of streaming and not a raw QPC value. The raw QPC value at the same moment is a completely different value on my computer.
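
This is roughly the comparison I am doing (simplified):

// Compare the device timestamp attribute with the sample time and the
// current raw QPC value.
UINT64 deviceTs = 0;
pSample->GetUINT64(MFSampleExtension_DeviceTimestamp, &deviceTs);

LONGLONG sampleTime = 0;
pSample->GetSampleTime(&sampleTime);   // 100-ns units, relative to start

LARGE_INTEGER qpc = {};
QueryPerformanceCounter(&qpc);         // raw QPC ticks right now

// Observed: deviceTs == sampleTime, while qpc.QuadPart is unrelated.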

Is it the documentation that is wrong, or is the AVStream capture driver faulty?

Regards Björn


Multi-input and multi-output


Hello,

I am writing a media source that outputs 2 streams: one is a video stream and the other is an audio stream. When playback starts, I get an error code from the session's GetEvent (MEError, E_FAIL, with no meaningful information). But if I deselect either one of the streams, the other stream plays normally. So what is the problem here?

A similar issue: I wrote an MFT which accepts two streams from 2 media sources and then outputs a single stream to the audio renderer. After setting the topology and playing it, I get an error about an unsupported topology. I don't know what the problem is. Is it caused by the 2 media sources or by the multi-input MFT?

Thanks

MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK differs from dbcc_name in DEV_BROADCAST_DEVICEINTERFACE


I am using a WMF video device in my application.

WMF does not behave the same under Windows 7 and Windows 10; it seems that MEVideoCaptureDeviceRemoved is never sent to the application under Windows 7.

To work around this, I configured a window message handler that detects USB disconnects, as shown in the following sample code provided by Microsoft:

https://msdn.microsoft.com/en-us/library/windows/desktop/dd940328%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396

This works very well under Windows 7.

However, under Windows 10, the symbolic link given by MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK differs at the end from the one provided by dbcc_name in DEV_BROADCAST_DEVICEINTERFACE. The GUIDs at the end of the symbolic links are different, preventing the _wcsicmp in the sample code from returning true. Is this a bug?

e.g. on Windows 10:

DEV_BROADCAST_DEVICEINTERFACE: \\?\usb#vid_06f8&pid_300d&mi_00#7&2a44964d&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global

MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK: \\?\usb#vid_06f8&pid_300d&mi_00#7&2a44964d&0&0000#{d288359f-6d1c-4148-b54b-d998e2b8f7f1}\global
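
Since only the trailing interface-class GUID differs in the two links above, I am considering comparing only the part before it as a workaround (a sketch; it assumes the difference is always confined to that trailing "#{guid}\global" part):

// Compare two device symbolic links while ignoring the trailing
// "#{interface-class-guid}\global" portion.
bool SameDeviceInstance(PCWSTR linkA, PCWSTR linkB)
{
    const wchar_t *endA = wcsrchr(linkA, L'#'); // last '#' precedes the GUID
    const wchar_t *endB = wcsrchr(linkB, L'#');
    if (!endA || !endB) return false;
    size_t lenA = (size_t)(endA - linkA);
    size_t lenB = (size_t)(endB - linkB);
    return lenA == lenB && _wcsnicmp(linkA, linkB, lenA) == 0;
}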

How to extract R8_UNORM and R8G8_UNORM from an NV12 texture, preferably with no copy?


I have a Direct3D 11 Texture2D / DXGI surface with a pixel format of NV12.

Is there a way to get the Y luminance (R8_UNORM) portion of it in another texture, preferably shared without copying?

And the same for the Chroma part?

I'd be happy if I could create 2 Direct2D bitmap planes from the texture.

I don't want to change the pixel format to RGB.

I want the textures to stay in GPU memory without copying them over the bus.

If that's not possible, can it be done with a copy?

If I lock the bits and create a D2D bitmap from pointer in a raw fashion, will it copy over the bus?
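
For shader access, one zero-copy route I believe works (the texture must have been created with D3D11_BIND_SHADER_RESOURCE; whether D2D can wrap the resulting views I don't know) is to create two shader resource views over the same NV12 texture: an R8_UNORM view maps the full-resolution Y plane and an R8G8_UNORM view maps the half-resolution interleaved UV plane:

// Two SRVs aliasing the planes of one NV12 texture; no copy involved.
D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
desc.Texture2D.MostDetailedMip = 0;
desc.Texture2D.MipLevels = 1;

ComPtr<ID3D11ShaderResourceView> srvLuma, srvChroma;
desc.Format = DXGI_FORMAT_R8_UNORM;    // Y plane, full resolution
device->CreateShaderResourceView(nv12Texture.Get(), &desc, &srvLuma);
desc.Format = DXGI_FORMAT_R8G8_UNORM;  // UV plane, half resolution
device->CreateShaderResourceView(nv12Texture.Get(), &desc, &srvChroma);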

Questions about the audio resampler


Hi,

Is the resampler capable of resampling a signal obtained at 250 kHz down to 192 kHz? I think it is, since it works with a wave file that has a sampling rate of 250 kHz and is converted properly by NAudio, which can use this resampler. So I have a couple of questions.

My samples are delivered as doubles, in two separate buffers of 16384 doubles, one for each channel, delivered continuously. What is the expected output buffer size? Can one specify 16384 as the size, with the resampler dealing with the incoming buffers until the output is created? How should I manage the two separate buffers: interleave the channel data? Also, is low-pass filtering done? Is the performance good enough for real time?
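
To make the interleaving question concrete, this is what I have in mind (a sketch built on two assumptions of mine: the resampler wants interleaved samples, and the doubles must be converted to 32-bit float first, since I don't believe the DSP accepts 64-bit samples; left and right are my two channel buffers):

// Interleave two mono double buffers into one stereo float buffer and
// estimate the output frame count for 250 kHz -> 192 kHz.
const size_t frames = 16384;
std::vector<float> interleaved(frames * 2);
for (size_t i = 0; i < frames; ++i)
{
    interleaved[2 * i]     = (float)left[i];   // channel 0
    interleaved[2 * i + 1] = (float)right[i];  // channel 1
}

// Expected output frames: ceil(16384 * 192000 / 250000) = 12583,
// though the DSP may hold some samples back internally.
UINT64 outFrames = ((UINT64)frames * 192000 + 250000 - 1) / 250000;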

Thanks for any insight.

Tom

Memory leak in IMFMediaEngine when playing any file containing audio


This leak happens when loading normal MP4 files with AAC audio tracks. If I strip out the audio track, there is no leak. If I strip out the video track and leave the audio track, the leak remains. If I just load an audio file (e.g. an MP3), the leak is present.

Loading an MP3 file, the leak is about 350KB per load.

The leak doesn't happen if I comment out the call mediaEngine->SetSource(L"...");

My system is:

Windows 10, 1803 (17134.345)

I stripped the program back to this very simple case:

void
TestLeak()
{
	while (true)
	{
		ComPtr<IMFMediaEngine>           mediaEngine;
		HRESULT hr = S_OK;
		{
			hr = MFStartup(MF_VERSION, MFSTARTUP_NOSOCKET);

			ComPtr<IMFMediaEngineClassFactory> spFactory;
			ComPtr<IMFAttributes> spAttributes;
			ComPtr<MediaEngineNotify> spNotify;

	
			if (hr == S_OK)
			{
				// Create our event callback object.
				spNotify.Attach(new MediaEngineNotify());
				if (spNotify == nullptr)
				{
					hr = E_FAIL;
				}
				else
				{
					//spNotify->SetMediaEngineNotifyCallback(this);
				}
			}

			if (hr == S_OK)
			{
				// Set configuration attribiutes.
				MFCreateAttributes(&spAttributes, 1);
				spAttributes->SetUnknown(MF_MEDIA_ENGINE_CALLBACK, (IUnknown*)spNotify.Get());
			}

			if (hr == S_OK)
			{
				// Create MediaEngine.
				hr = CoCreateInstance(CLSID_MFMediaEngineClassFactory, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&spFactory));
				if (hr == S_OK)
				{
					DWORD flags = 0;
					flags |= MF_MEDIA_ENGINE_DISABLE_LOCAL_PLUGINS;
					//flags |= MF_MEDIA_ENGINE_FORCEMUTE;
					hr = spFactory->CreateInstance(flags, spAttributes.Get(), &mediaEngine);
				}
			}
		}

		{
			hr = mediaEngine->SetSource(L"R:/Products/AVProVideo/trunk/Unity/Assets/StreamingAssets/AVProVideoSamples/BigBuckBunny_360p30.mp3");
			Sleep(2000);
			//hr = mediaEngine->Load();
			//Sleep(1500);
			//hr = mediaEngine->Play();
			//Sleep(2500);
			hr = mediaEngine->Pause();
			hr = mediaEngine->Shutdown();
			mediaEngine.Reset(); // ComPtr::Reset returns the ref count, not an HRESULT

			hr = MFShutdown();
		}
	}
}

I hope someone at Microsoft can test this and open a ticket/case so that this can get resolved.

Thanks,

